Sep 9 21:15:10.766371 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 9 21:15:10.766391 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 19:54:20 -00 2025
Sep 9 21:15:10.766401 kernel: KASLR enabled
Sep 9 21:15:10.766406 kernel: efi: EFI v2.7 by EDK II
Sep 9 21:15:10.766412 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 9 21:15:10.766417 kernel: random: crng init done
Sep 9 21:15:10.766424 kernel: secureboot: Secure boot disabled
Sep 9 21:15:10.766430 kernel: ACPI: Early table checksum verification disabled
Sep 9 21:15:10.766436 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 9 21:15:10.766443 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 9 21:15:10.766449 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766455 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766461 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766467 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766474 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766482 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766488 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766494 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766500 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 9 21:15:10.766506 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 9 21:15:10.766512 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 9 21:15:10.766518 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 21:15:10.766524 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 9 21:15:10.766531 kernel: Zone ranges:
Sep 9 21:15:10.766537 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 21:15:10.766544 kernel: DMA32 empty
Sep 9 21:15:10.766580 kernel: Normal empty
Sep 9 21:15:10.766588 kernel: Device empty
Sep 9 21:15:10.766594 kernel: Movable zone start for each node
Sep 9 21:15:10.766600 kernel: Early memory node ranges
Sep 9 21:15:10.766606 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 9 21:15:10.766612 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 9 21:15:10.766618 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 9 21:15:10.766624 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 9 21:15:10.766630 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 9 21:15:10.766636 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 9 21:15:10.766642 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 9 21:15:10.766651 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 9 21:15:10.766657 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 9 21:15:10.766663 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 9 21:15:10.766671 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 9 21:15:10.766677 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 9 21:15:10.766684 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 9 21:15:10.766691 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 9 21:15:10.766698 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 9 21:15:10.766704 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 9 21:15:10.766710 kernel: psci: probing for conduit method from ACPI.
Sep 9 21:15:10.766716 kernel: psci: PSCIv1.1 detected in firmware.
Sep 9 21:15:10.766723 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 9 21:15:10.766729 kernel: psci: Trusted OS migration not required
Sep 9 21:15:10.766735 kernel: psci: SMC Calling Convention v1.1
Sep 9 21:15:10.766742 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 9 21:15:10.766748 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 9 21:15:10.766756 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 9 21:15:10.766762 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 9 21:15:10.766768 kernel: Detected PIPT I-cache on CPU0
Sep 9 21:15:10.766775 kernel: CPU features: detected: GIC system register CPU interface
Sep 9 21:15:10.766781 kernel: CPU features: detected: Spectre-v4
Sep 9 21:15:10.766787 kernel: CPU features: detected: Spectre-BHB
Sep 9 21:15:10.766794 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 9 21:15:10.766800 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 9 21:15:10.766806 kernel: CPU features: detected: ARM erratum 1418040
Sep 9 21:15:10.766813 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 9 21:15:10.766819 kernel: alternatives: applying boot alternatives
Sep 9 21:15:10.766826 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f5bd02e888bbcae51800cf37660dcdbf356eb05540a834019d706c2521a92d30
Sep 9 21:15:10.766834 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 9 21:15:10.766841 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 9 21:15:10.766847 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 9 21:15:10.766854 kernel: Fallback order for Node 0: 0
Sep 9 21:15:10.766860 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 9 21:15:10.766866 kernel: Policy zone: DMA
Sep 9 21:15:10.766872 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 9 21:15:10.766879 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 9 21:15:10.766885 kernel: software IO TLB: area num 4.
Sep 9 21:15:10.766891 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 9 21:15:10.766898 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 9 21:15:10.766905 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 9 21:15:10.766912 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 9 21:15:10.766919 kernel: rcu: RCU event tracing is enabled.
Sep 9 21:15:10.766925 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 9 21:15:10.766932 kernel: Trampoline variant of Tasks RCU enabled.
Sep 9 21:15:10.766938 kernel: Tracing variant of Tasks RCU enabled.
Sep 9 21:15:10.766945 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 9 21:15:10.766966 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 9 21:15:10.766972 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 21:15:10.766979 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 9 21:15:10.766985 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 9 21:15:10.766993 kernel: GICv3: 256 SPIs implemented
Sep 9 21:15:10.766999 kernel: GICv3: 0 Extended SPIs implemented
Sep 9 21:15:10.767005 kernel: Root IRQ handler: gic_handle_irq
Sep 9 21:15:10.767012 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 9 21:15:10.767018 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 9 21:15:10.767024 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 9 21:15:10.767031 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 9 21:15:10.767037 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 9 21:15:10.767044 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 9 21:15:10.767050 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 9 21:15:10.767056 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 9 21:15:10.767063 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 9 21:15:10.767070 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:15:10.767077 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 9 21:15:10.767083 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 9 21:15:10.767090 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 9 21:15:10.767096 kernel: arm-pv: using stolen time PV
Sep 9 21:15:10.767103 kernel: Console: colour dummy device 80x25
Sep 9 21:15:10.767110 kernel: ACPI: Core revision 20240827
Sep 9 21:15:10.767116 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 9 21:15:10.767123 kernel: pid_max: default: 32768 minimum: 301
Sep 9 21:15:10.767130 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 9 21:15:10.767137 kernel: landlock: Up and running.
Sep 9 21:15:10.767144 kernel: SELinux: Initializing.
Sep 9 21:15:10.767150 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 21:15:10.767157 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 9 21:15:10.767164 kernel: rcu: Hierarchical SRCU implementation.
Sep 9 21:15:10.767170 kernel: rcu: Max phase no-delay instances is 400.
Sep 9 21:15:10.767177 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 9 21:15:10.767184 kernel: Remapping and enabling EFI services.
Sep 9 21:15:10.767190 kernel: smp: Bringing up secondary CPUs ...
Sep 9 21:15:10.767202 kernel: Detected PIPT I-cache on CPU1
Sep 9 21:15:10.767209 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 9 21:15:10.767216 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 9 21:15:10.767224 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:15:10.767231 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 9 21:15:10.767238 kernel: Detected PIPT I-cache on CPU2
Sep 9 21:15:10.767245 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 9 21:15:10.767252 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 9 21:15:10.767260 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:15:10.767267 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 9 21:15:10.767274 kernel: Detected PIPT I-cache on CPU3
Sep 9 21:15:10.767281 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 9 21:15:10.767288 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 9 21:15:10.767295 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 9 21:15:10.767301 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 9 21:15:10.767308 kernel: smp: Brought up 1 node, 4 CPUs
Sep 9 21:15:10.767315 kernel: SMP: Total of 4 processors activated.
Sep 9 21:15:10.767323 kernel: CPU: All CPU(s) started at EL1
Sep 9 21:15:10.767330 kernel: CPU features: detected: 32-bit EL0 Support
Sep 9 21:15:10.767337 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 9 21:15:10.767344 kernel: CPU features: detected: Common not Private translations
Sep 9 21:15:10.767351 kernel: CPU features: detected: CRC32 instructions
Sep 9 21:15:10.767358 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 9 21:15:10.767365 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 9 21:15:10.767372 kernel: CPU features: detected: LSE atomic instructions
Sep 9 21:15:10.767379 kernel: CPU features: detected: Privileged Access Never
Sep 9 21:15:10.767387 kernel: CPU features: detected: RAS Extension Support
Sep 9 21:15:10.767394 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 9 21:15:10.767401 kernel: alternatives: applying system-wide alternatives
Sep 9 21:15:10.767408 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 9 21:15:10.767415 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 9 21:15:10.767422 kernel: devtmpfs: initialized
Sep 9 21:15:10.767429 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 9 21:15:10.767436 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 9 21:15:10.767442 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 9 21:15:10.767451 kernel: 0 pages in range for non-PLT usage
Sep 9 21:15:10.767457 kernel: 508560 pages in range for PLT usage
Sep 9 21:15:10.767464 kernel: pinctrl core: initialized pinctrl subsystem
Sep 9 21:15:10.767471 kernel: SMBIOS 3.0.0 present.
Sep 9 21:15:10.767478 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 9 21:15:10.767484 kernel: DMI: Memory slots populated: 1/1
Sep 9 21:15:10.767491 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 9 21:15:10.767498 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 9 21:15:10.767505 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 9 21:15:10.767513 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 9 21:15:10.767520 kernel: audit: initializing netlink subsys (disabled)
Sep 9 21:15:10.767527 kernel: audit: type=2000 audit(0.027:1): state=initialized audit_enabled=0 res=1
Sep 9 21:15:10.767534 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 9 21:15:10.767541 kernel: cpuidle: using governor menu
Sep 9 21:15:10.767553 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 9 21:15:10.767561 kernel: ASID allocator initialised with 32768 entries
Sep 9 21:15:10.767574 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 9 21:15:10.767581 kernel: Serial: AMBA PL011 UART driver
Sep 9 21:15:10.767590 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 9 21:15:10.767597 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 9 21:15:10.767604 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 9 21:15:10.767611 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 9 21:15:10.767618 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 9 21:15:10.767625 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 9 21:15:10.767632 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 9 21:15:10.767638 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 9 21:15:10.767645 kernel: ACPI: Added _OSI(Module Device)
Sep 9 21:15:10.767653 kernel: ACPI: Added _OSI(Processor Device)
Sep 9 21:15:10.767660 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 9 21:15:10.767667 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 9 21:15:10.767674 kernel: ACPI: Interpreter enabled
Sep 9 21:15:10.767681 kernel: ACPI: Using GIC for interrupt routing
Sep 9 21:15:10.767688 kernel: ACPI: MCFG table detected, 1 entries
Sep 9 21:15:10.767695 kernel: ACPI: CPU0 has been hot-added
Sep 9 21:15:10.767701 kernel: ACPI: CPU1 has been hot-added
Sep 9 21:15:10.767708 kernel: ACPI: CPU2 has been hot-added
Sep 9 21:15:10.767715 kernel: ACPI: CPU3 has been hot-added
Sep 9 21:15:10.767723 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 9 21:15:10.767730 kernel: printk: legacy console [ttyAMA0] enabled
Sep 9 21:15:10.767737 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 9 21:15:10.767868 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 9 21:15:10.767931 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 9 21:15:10.767988 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 9 21:15:10.768044 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 9 21:15:10.768101 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 9 21:15:10.768110 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 9 21:15:10.768117 kernel: PCI host bridge to bus 0000:00
Sep 9 21:15:10.768181 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 9 21:15:10.768234 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 9 21:15:10.768285 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 9 21:15:10.768336 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 9 21:15:10.768413 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 9 21:15:10.768481 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 9 21:15:10.768541 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 9 21:15:10.768635 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 9 21:15:10.768696 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 9 21:15:10.768753 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 9 21:15:10.768811 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 9 21:15:10.768873 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 9 21:15:10.768927 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 9 21:15:10.768978 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 9 21:15:10.769030 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 9 21:15:10.769039 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 9 21:15:10.769046 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 9 21:15:10.769053 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 9 21:15:10.769062 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 9 21:15:10.769069 kernel: iommu: Default domain type: Translated
Sep 9 21:15:10.769076 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 9 21:15:10.769082 kernel: efivars: Registered efivars operations
Sep 9 21:15:10.769090 kernel: vgaarb: loaded
Sep 9 21:15:10.769097 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 9 21:15:10.769104 kernel: VFS: Disk quotas dquot_6.6.0
Sep 9 21:15:10.769111 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 9 21:15:10.769118 kernel: pnp: PnP ACPI init
Sep 9 21:15:10.769192 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 9 21:15:10.769201 kernel: pnp: PnP ACPI: found 1 devices
Sep 9 21:15:10.769209 kernel: NET: Registered PF_INET protocol family
Sep 9 21:15:10.769216 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 9 21:15:10.769223 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 9 21:15:10.769230 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 9 21:15:10.769238 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 9 21:15:10.769245 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 9 21:15:10.769253 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 9 21:15:10.769261 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 21:15:10.769268 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 9 21:15:10.769275 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 9 21:15:10.769282 kernel: PCI: CLS 0 bytes, default 64
Sep 9 21:15:10.769289 kernel: kvm [1]: HYP mode not available
Sep 9 21:15:10.769296 kernel: Initialise system trusted keyrings
Sep 9 21:15:10.769303 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 9 21:15:10.769310 kernel: Key type asymmetric registered
Sep 9 21:15:10.769318 kernel: Asymmetric key parser 'x509' registered
Sep 9 21:15:10.769325 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 9 21:15:10.769332 kernel: io scheduler mq-deadline registered
Sep 9 21:15:10.769338 kernel: io scheduler kyber registered
Sep 9 21:15:10.769345 kernel: io scheduler bfq registered
Sep 9 21:15:10.769352 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 9 21:15:10.769359 kernel: ACPI: button: Power Button [PWRB]
Sep 9 21:15:10.769366 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 9 21:15:10.769423 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 9 21:15:10.769434 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 9 21:15:10.769441 kernel: thunder_xcv, ver 1.0
Sep 9 21:15:10.769447 kernel: thunder_bgx, ver 1.0
Sep 9 21:15:10.769454 kernel: nicpf, ver 1.0
Sep 9 21:15:10.769461 kernel: nicvf, ver 1.0
Sep 9 21:15:10.769527 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 9 21:15:10.769612 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T21:15:10 UTC (1757452510)
Sep 9 21:15:10.769623 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 9 21:15:10.769630 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 9 21:15:10.769640 kernel: watchdog: NMI not fully supported
Sep 9 21:15:10.769647 kernel: watchdog: Hard watchdog permanently disabled
Sep 9 21:15:10.769654 kernel: NET: Registered PF_INET6 protocol family
Sep 9 21:15:10.769661 kernel: Segment Routing with IPv6
Sep 9 21:15:10.769668 kernel: In-situ OAM (IOAM) with IPv6
Sep 9 21:15:10.769674 kernel: NET: Registered PF_PACKET protocol family
Sep 9 21:15:10.769681 kernel: Key type dns_resolver registered
Sep 9 21:15:10.769688 kernel: registered taskstats version 1
Sep 9 21:15:10.769695 kernel: Loading compiled-in X.509 certificates
Sep 9 21:15:10.769704 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: f5007e8dd2a6cc57a1fe19052a0aaf9985861c4d'
Sep 9 21:15:10.769711 kernel: Demotion targets for Node 0: null
Sep 9 21:15:10.769717 kernel: Key type .fscrypt registered
Sep 9 21:15:10.769724 kernel: Key type fscrypt-provisioning registered
Sep 9 21:15:10.769731 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 9 21:15:10.769738 kernel: ima: Allocated hash algorithm: sha1
Sep 9 21:15:10.769745 kernel: ima: No architecture policies found
Sep 9 21:15:10.769752 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 9 21:15:10.769760 kernel: clk: Disabling unused clocks
Sep 9 21:15:10.769768 kernel: PM: genpd: Disabling unused power domains
Sep 9 21:15:10.769774 kernel: Warning: unable to open an initial console.
Sep 9 21:15:10.769782 kernel: Freeing unused kernel memory: 38976K
Sep 9 21:15:10.769788 kernel: Run /init as init process
Sep 9 21:15:10.769795 kernel: with arguments:
Sep 9 21:15:10.769802 kernel: /init
Sep 9 21:15:10.769809 kernel: with environment:
Sep 9 21:15:10.769815 kernel: HOME=/
Sep 9 21:15:10.769822 kernel: TERM=linux
Sep 9 21:15:10.769830 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 9 21:15:10.769838 systemd[1]: Successfully made /usr/ read-only.
Sep 9 21:15:10.769848 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 21:15:10.769856 systemd[1]: Detected virtualization kvm.
Sep 9 21:15:10.769863 systemd[1]: Detected architecture arm64.
Sep 9 21:15:10.769871 systemd[1]: Running in initrd.
Sep 9 21:15:10.769878 systemd[1]: No hostname configured, using default hostname.
Sep 9 21:15:10.769887 systemd[1]: Hostname set to .
Sep 9 21:15:10.769894 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 21:15:10.769902 systemd[1]: Queued start job for default target initrd.target.
Sep 9 21:15:10.769909 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:15:10.769916 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:15:10.769924 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 9 21:15:10.769932 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 21:15:10.769940 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 9 21:15:10.769950 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 9 21:15:10.769958 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 9 21:15:10.769966 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 9 21:15:10.769974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:15:10.769981 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:15:10.769988 systemd[1]: Reached target paths.target - Path Units.
Sep 9 21:15:10.769996 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 21:15:10.770005 systemd[1]: Reached target swap.target - Swaps.
Sep 9 21:15:10.770012 systemd[1]: Reached target timers.target - Timer Units.
Sep 9 21:15:10.770019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 21:15:10.770027 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 21:15:10.770034 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 9 21:15:10.770042 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 9 21:15:10.770050 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 21:15:10.770057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:15:10.770066 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:15:10.770073 systemd[1]: Reached target sockets.target - Socket Units.
Sep 9 21:15:10.770081 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 9 21:15:10.770088 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 21:15:10.770096 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 9 21:15:10.770104 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 9 21:15:10.770111 systemd[1]: Starting systemd-fsck-usr.service...
Sep 9 21:15:10.770119 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 21:15:10.770126 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 21:15:10.770135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:15:10.770142 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 9 21:15:10.770150 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:15:10.770158 systemd[1]: Finished systemd-fsck-usr.service.
Sep 9 21:15:10.770167 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 9 21:15:10.770190 systemd-journald[245]: Collecting audit messages is disabled.
Sep 9 21:15:10.770209 systemd-journald[245]: Journal started
Sep 9 21:15:10.770228 systemd-journald[245]: Runtime Journal (/run/log/journal/72486dac949145de9061c7c50c7d250d) is 6M, max 48.5M, 42.4M free.
Sep 9 21:15:10.779672 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 9 21:15:10.779705 kernel: Bridge firewalling registered
Sep 9 21:15:10.762108 systemd-modules-load[246]: Inserted module 'overlay'
Sep 9 21:15:10.776641 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 9 21:15:10.783728 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:15:10.783748 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 21:15:10.785609 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:15:10.786675 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 9 21:15:10.792222 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 9 21:15:10.793823 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:15:10.795487 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 21:15:10.803690 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 21:15:10.806025 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:15:10.807193 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:15:10.812041 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 21:15:10.812622 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 9 21:15:10.813812 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 9 21:15:10.815415 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:15:10.827628 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 21:15:10.837891 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=f5bd02e888bbcae51800cf37660dcdbf356eb05540a834019d706c2521a92d30
Sep 9 21:15:10.868436 systemd-resolved[288]: Positive Trust Anchors:
Sep 9 21:15:10.868458 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 21:15:10.868489 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 21:15:10.873201 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 9 21:15:10.874154 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 9 21:15:10.876584 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 9 21:15:10.909589 kernel: SCSI subsystem initialized
Sep 9 21:15:10.913578 kernel: Loading iSCSI transport class v2.0-870.
Sep 9 21:15:10.921589 kernel: iscsi: registered transport (tcp)
Sep 9 21:15:10.933594 kernel: iscsi: registered transport (qla4xxx)
Sep 9 21:15:10.933614 kernel: QLogic iSCSI HBA Driver
Sep 9 21:15:10.949980 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 21:15:10.966648 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:15:10.969535 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 21:15:11.013519 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 9 21:15:11.015220 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 9 21:15:11.079596 kernel: raid6: neonx8 gen() 15758 MB/s
Sep 9 21:15:11.096587 kernel: raid6: neonx4 gen() 15805 MB/s
Sep 9 21:15:11.113583 kernel: raid6: neonx2 gen() 13192 MB/s
Sep 9 21:15:11.130585 kernel: raid6: neonx1 gen() 10463 MB/s
Sep 9 21:15:11.147589 kernel: raid6: int64x8 gen() 6897 MB/s
Sep 9 21:15:11.164588 kernel: raid6: int64x4 gen() 7356 MB/s
Sep 9 21:15:11.181584 kernel: raid6: int64x2 gen() 6079 MB/s
Sep 9 21:15:11.198585 kernel: raid6: int64x1 gen() 5052 MB/s
Sep 9 21:15:11.198601 kernel: raid6: using algorithm neonx4 gen() 15805 MB/s
Sep 9 21:15:11.215593 kernel: raid6: .... xor() 12319 MB/s, rmw enabled
Sep 9 21:15:11.215611 kernel: raid6: using neon recovery algorithm
Sep 9 21:15:11.220801 kernel: xor: measuring software checksum speed
Sep 9 21:15:11.220827 kernel: 8regs : 21590 MB/sec
Sep 9 21:15:11.221857 kernel: 32regs : 21693 MB/sec
Sep 9 21:15:11.221870 kernel: arm64_neon : 28128 MB/sec
Sep 9 21:15:11.221879 kernel: xor: using function: arm64_neon (28128 MB/sec)
Sep 9 21:15:11.275589 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 9 21:15:11.282279 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 21:15:11.287628 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:15:11.322158 systemd-udevd[497]: Using default interface naming scheme 'v255'.
Sep 9 21:15:11.326402 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:15:11.329180 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 9 21:15:11.356783 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 9 21:15:11.378981 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 21:15:11.381129 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 21:15:11.438615 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:15:11.441857 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 9 21:15:11.486626 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 9 21:15:11.489421 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 9 21:15:11.495839 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 9 21:15:11.495873 kernel: GPT:9289727 != 19775487
Sep 9 21:15:11.495883 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 9 21:15:11.496175 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 21:15:11.497004 kernel: GPT:9289727 != 19775487
Sep 9 21:15:11.496301 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:15:11.499051 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 9 21:15:11.499062 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:15:11.501142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 21:15:11.501223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 9 21:15:11.523925 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 9 21:15:11.532202 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 9 21:15:11.534097 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:15:11.535183 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 9 21:15:11.551220 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 9 21:15:11.552218 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 9 21:15:11.560643 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 21:15:11.561541 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 21:15:11.563337 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:15:11.564991 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 21:15:11.567143 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 9 21:15:11.568690 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 9 21:15:11.587261 disk-uuid[592]: Primary Header is updated.
Sep 9 21:15:11.587261 disk-uuid[592]: Secondary Entries is updated.
Sep 9 21:15:11.587261 disk-uuid[592]: Secondary Header is updated.
Sep 9 21:15:11.590598 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 21:15:11.595273 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 21:15:12.596809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 9 21:15:12.597506 disk-uuid[595]: The operation has completed successfully.
Sep 9 21:15:12.619994 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 9 21:15:12.620121 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 9 21:15:12.645978 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 9 21:15:12.669587 sh[611]: Success
Sep 9 21:15:12.682194 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 9 21:15:12.682233 kernel: device-mapper: uevent: version 1.0.3
Sep 9 21:15:12.682246 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 9 21:15:12.688596 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 9 21:15:12.712071 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 9 21:15:12.714524 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 9 21:15:12.726594 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 9 21:15:12.729674 kernel: BTRFS: device fsid 0420e954-c3c6-4e24-9a07-863b2151b564 devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (623)
Sep 9 21:15:12.731769 kernel: BTRFS info (device dm-0): first mount of filesystem 0420e954-c3c6-4e24-9a07-863b2151b564
Sep 9 21:15:12.731789 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:15:12.735742 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 9 21:15:12.735773 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 9 21:15:12.736722 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 9 21:15:12.737745 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 21:15:12.738857 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 9 21:15:12.739550 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 9 21:15:12.742294 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 9 21:15:12.759602 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (652)
Sep 9 21:15:12.761224 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:15:12.761256 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:15:12.763630 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 21:15:12.763669 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 21:15:12.767606 kernel: BTRFS info (device vda6): last unmount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:15:12.769219 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 9 21:15:12.770861 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 9 21:15:12.838308 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 21:15:12.841817 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 21:15:12.875757 systemd-networkd[804]: lo: Link UP
Sep 9 21:15:12.875771 systemd-networkd[804]: lo: Gained carrier
Sep 9 21:15:12.876455 ignition[695]: Ignition 2.22.0
Sep 9 21:15:12.876514 systemd-networkd[804]: Enumeration completed
Sep 9 21:15:12.876461 ignition[695]: Stage: fetch-offline
Sep 9 21:15:12.876947 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:15:12.876489 ignition[695]: no configs at "/usr/lib/ignition/base.d"
Sep 9 21:15:12.876950 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 21:15:12.876496 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:15:12.877134 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 21:15:12.876607 ignition[695]: parsed url from cmdline: ""
Sep 9 21:15:12.877492 systemd-networkd[804]: eth0: Link UP
Sep 9 21:15:12.876611 ignition[695]: no config URL provided
Sep 9 21:15:12.877925 systemd-networkd[804]: eth0: Gained carrier
Sep 9 21:15:12.876616 ignition[695]: reading system config file "/usr/lib/ignition/user.ign"
Sep 9 21:15:12.877935 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:15:12.876622 ignition[695]: no config at "/usr/lib/ignition/user.ign"
Sep 9 21:15:12.878477 systemd[1]: Reached target network.target - Network.
Sep 9 21:15:12.876640 ignition[695]: op(1): [started] loading QEMU firmware config module
Sep 9 21:15:12.892605 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 9 21:15:12.876644 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 9 21:15:12.883531 ignition[695]: op(1): [finished] loading QEMU firmware config module
Sep 9 21:15:12.931107 ignition[695]: parsing config with SHA512: 9d872d636b18234ad8981a7b7ff53ca0639940f6608607e457585ce5c12ef774f7ee369a94753b4da8cd0a1217d6abf25cc688b1913d74d349ddfc3d1ad203c0
Sep 9 21:15:12.935083 unknown[695]: fetched base config from "system"
Sep 9 21:15:12.935095 unknown[695]: fetched user config from "qemu"
Sep 9 21:15:12.936106 ignition[695]: fetch-offline: fetch-offline passed
Sep 9 21:15:12.936264 ignition[695]: Ignition finished successfully
Sep 9 21:15:12.938894 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 21:15:12.940481 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 9 21:15:12.941226 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 9 21:15:12.981938 ignition[814]: Ignition 2.22.0
Sep 9 21:15:12.981954 ignition[814]: Stage: kargs
Sep 9 21:15:12.982077 ignition[814]: no configs at "/usr/lib/ignition/base.d"
Sep 9 21:15:12.982086 ignition[814]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:15:12.983004 ignition[814]: kargs: kargs passed
Sep 9 21:15:12.983047 ignition[814]: Ignition finished successfully
Sep 9 21:15:12.985496 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 9 21:15:12.987964 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 9 21:15:13.025215 ignition[822]: Ignition 2.22.0
Sep 9 21:15:13.025232 ignition[822]: Stage: disks
Sep 9 21:15:13.025356 ignition[822]: no configs at "/usr/lib/ignition/base.d"
Sep 9 21:15:13.025365 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:15:13.026111 ignition[822]: disks: disks passed
Sep 9 21:15:13.028285 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 9 21:15:13.026155 ignition[822]: Ignition finished successfully
Sep 9 21:15:13.029874 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 9 21:15:13.031020 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 9 21:15:13.032403 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 21:15:13.033705 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 9 21:15:13.035115 systemd[1]: Reached target basic.target - Basic System.
Sep 9 21:15:13.037364 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 9 21:15:13.068202 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 9 21:15:13.072523 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 9 21:15:13.076635 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 9 21:15:13.133589 kernel: EXT4-fs (vda9): mounted filesystem 09d5f77d-9531-4ec2-9062-5fa777d03891 r/w with ordered data mode. Quota mode: none.
Sep 9 21:15:13.134302 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 9 21:15:13.135381 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 9 21:15:13.137415 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 21:15:13.138928 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 9 21:15:13.139735 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 9 21:15:13.139775 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 9 21:15:13.139797 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 21:15:13.148986 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 9 21:15:13.151361 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 9 21:15:13.155483 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (841)
Sep 9 21:15:13.155508 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:15:13.155544 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:15:13.157227 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 21:15:13.157263 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 21:15:13.158307 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 21:15:13.184623 initrd-setup-root[865]: cut: /sysroot/etc/passwd: No such file or directory
Sep 9 21:15:13.188621 initrd-setup-root[872]: cut: /sysroot/etc/group: No such file or directory
Sep 9 21:15:13.192337 initrd-setup-root[879]: cut: /sysroot/etc/shadow: No such file or directory
Sep 9 21:15:13.196092 initrd-setup-root[886]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 9 21:15:13.260962 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 9 21:15:13.262967 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 9 21:15:13.264322 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 9 21:15:13.286667 kernel: BTRFS info (device vda6): last unmount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:15:13.300696 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 9 21:15:13.315907 ignition[954]: INFO : Ignition 2.22.0
Sep 9 21:15:13.315907 ignition[954]: INFO : Stage: mount
Sep 9 21:15:13.317158 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:15:13.317158 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:15:13.317158 ignition[954]: INFO : mount: mount passed
Sep 9 21:15:13.317158 ignition[954]: INFO : Ignition finished successfully
Sep 9 21:15:13.318975 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 9 21:15:13.320811 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 9 21:15:13.854635 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 9 21:15:13.856155 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 9 21:15:13.871012 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (968)
Sep 9 21:15:13.871050 kernel: BTRFS info (device vda6): first mount of filesystem 65698167-02fe-46cf-95a3-7944ec314f1c
Sep 9 21:15:13.871069 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 9 21:15:13.873995 kernel: BTRFS info (device vda6): turning on async discard
Sep 9 21:15:13.874014 kernel: BTRFS info (device vda6): enabling free space tree
Sep 9 21:15:13.875272 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 9 21:15:13.905038 ignition[985]: INFO : Ignition 2.22.0
Sep 9 21:15:13.905038 ignition[985]: INFO : Stage: files
Sep 9 21:15:13.906309 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:15:13.906309 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:15:13.906309 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
Sep 9 21:15:13.909261 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 9 21:15:13.909261 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 9 21:15:13.909261 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 9 21:15:13.909261 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 9 21:15:13.909261 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 9 21:15:13.908927 unknown[985]: wrote ssh authorized keys file for user: core
Sep 9 21:15:13.915233 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 21:15:13.915233 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 9 21:15:14.029528 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 9 21:15:14.335706 systemd-networkd[804]: eth0: Gained IPv6LL
Sep 9 21:15:14.524719 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 9 21:15:14.524719 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 21:15:14.527678 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 9 21:15:14.820093 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 9 21:15:15.195426 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 9 21:15:15.195426 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 9 21:15:15.195426 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 9 21:15:15.195426 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 21:15:15.195426 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 9 21:15:15.195426 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 21:15:15.204173 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 9 21:15:15.487170 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 9 21:15:15.736836 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 9 21:15:15.736836 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 9 21:15:15.739874 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 21:15:15.741311 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 9 21:15:15.741311 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 9 21:15:15.741311 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 9 21:15:15.741311 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 21:15:15.741311 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 9 21:15:15.741311 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 9 21:15:15.741311 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 9 21:15:15.757261 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 21:15:15.760370 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 9 21:15:15.762714 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 9 21:15:15.762714 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 9 21:15:15.762714 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 9 21:15:15.762714 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 21:15:15.762714 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 9 21:15:15.762714 ignition[985]: INFO : files: files passed
Sep 9 21:15:15.762714 ignition[985]: INFO : Ignition finished successfully
Sep 9 21:15:15.763350 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 9 21:15:15.765884 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 9 21:15:15.768717 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 9 21:15:15.782751 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 9 21:15:15.783806 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 9 21:15:15.784690 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 9 21:15:15.787428 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:15:15.787428 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:15:15.789872 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 9 21:15:15.789812 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 21:15:15.790882 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 9 21:15:15.793018 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 9 21:15:15.836368 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 9 21:15:15.836469 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 9 21:15:15.838305 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 9 21:15:15.839647 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 9 21:15:15.841140 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 9 21:15:15.841833 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 9 21:15:15.855034 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 21:15:15.857064 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 9 21:15:15.872862 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 9 21:15:15.873774 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:15:15.875324 systemd[1]: Stopped target timers.target - Timer Units.
Sep 9 21:15:15.876766 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 9 21:15:15.876871 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 9 21:15:15.878856 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 9 21:15:15.880359 systemd[1]: Stopped target basic.target - Basic System.
Sep 9 21:15:15.881827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 9 21:15:15.883150 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 9 21:15:15.884584 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 9 21:15:15.886329 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 9 21:15:15.887794 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 9 21:15:15.889235 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 9 21:15:15.890706 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 9 21:15:15.892196 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 9 21:15:15.893460 systemd[1]: Stopped target swap.target - Swaps.
Sep 9 21:15:15.894795 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 9 21:15:15.894907 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 9 21:15:15.896742 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:15:15.898198 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:15:15.899669 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 9 21:15:15.901263 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:15:15.902254 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 9 21:15:15.902358 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 9 21:15:15.904583 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 9 21:15:15.904699 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 9 21:15:15.906343 systemd[1]: Stopped target paths.target - Path Units.
Sep 9 21:15:15.907491 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 9 21:15:15.907595 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:15:15.909305 systemd[1]: Stopped target slices.target - Slice Units.
Sep 9 21:15:15.910611 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 9 21:15:15.912024 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 9 21:15:15.912101 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 9 21:15:15.913792 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 9 21:15:15.913870 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 9 21:15:15.915040 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 9 21:15:15.915143 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 9 21:15:15.916542 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 9 21:15:15.916652 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 9 21:15:15.918624 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 9 21:15:15.919680 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 9 21:15:15.919804 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:15:15.921956 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 9 21:15:15.923466 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 9 21:15:15.923610 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:15:15.925073 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 9 21:15:15.925169 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 9 21:15:15.929665 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 9 21:15:15.929738 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 9 21:15:15.937586 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 9 21:15:15.942349 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 9 21:15:15.943594 ignition[1040]: INFO : Ignition 2.22.0
Sep 9 21:15:15.943594 ignition[1040]: INFO : Stage: umount
Sep 9 21:15:15.946257 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 9 21:15:15.946257 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 9 21:15:15.946257 ignition[1040]: INFO : umount: umount passed
Sep 9 21:15:15.946257 ignition[1040]: INFO : Ignition finished successfully
Sep 9 21:15:15.943611 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 9 21:15:15.946363 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 9 21:15:15.946459 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 9 21:15:15.948038 systemd[1]: Stopped target network.target - Network.
Sep 9 21:15:15.949361 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 9 21:15:15.949415 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 9 21:15:15.950624 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 9 21:15:15.950662 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 9 21:15:15.952065 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 9 21:15:15.952115 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 9 21:15:15.953387 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 9 21:15:15.953422 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 9 21:15:15.954844 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 9 21:15:15.954890 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 9 21:15:15.956301 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 9 21:15:15.957452 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 9 21:15:15.963396 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 9 21:15:15.963484 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 9 21:15:15.967826 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 9 21:15:15.968005 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 9 21:15:15.968092 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 9 21:15:15.971356 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 9 21:15:15.972098 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 9 21:15:15.973351 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 9 21:15:15.973393 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 21:15:15.975859 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 9 21:15:15.977213 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 9 21:15:15.977261 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 9 21:15:15.978905 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 21:15:15.978938 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:15:15.981009 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 9 21:15:15.981046 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:15:15.982656 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 9 21:15:15.982693 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:15:15.984779 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:15:15.988212 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 9 21:15:15.988266 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 9 21:15:15.995987 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 9 21:15:16.004801 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:15:16.006025 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 9 21:15:16.006062 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:15:16.007578 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 9 21:15:16.007623 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:15:16.009287 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 9 21:15:16.009332 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 9 21:15:16.011476 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 9 21:15:16.011517 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 9 21:15:16.013553 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 9 21:15:16.013609 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 9 21:15:16.016620 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 9 21:15:16.018033 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 9 21:15:16.018087 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:15:16.020603 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 9 21:15:16.020643 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:15:16.023397 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 9 21:15:16.023438 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 9 21:15:16.025947 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 9 21:15:16.025991 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 9 21:15:16.026021 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 9 21:15:16.026283 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 9 21:15:16.026391 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 9 21:15:16.028858 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 9 21:15:16.028964 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 9 21:15:16.031120 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 9 21:15:16.033014 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 9 21:15:16.048211 systemd[1]: Switching root.
Sep 9 21:15:16.074545 systemd-journald[245]: Journal stopped
Sep 9 21:15:16.804130 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 9 21:15:16.804181 kernel: SELinux: policy capability network_peer_controls=1
Sep 9 21:15:16.804196 kernel: SELinux: policy capability open_perms=1
Sep 9 21:15:16.804209 kernel: SELinux: policy capability extended_socket_class=1
Sep 9 21:15:16.804222 kernel: SELinux: policy capability always_check_network=0
Sep 9 21:15:16.804231 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 9 21:15:16.804240 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 9 21:15:16.804249 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 9 21:15:16.804258 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 9 21:15:16.804267 kernel: SELinux: policy capability userspace_initial_context=0
Sep 9 21:15:16.804280 kernel: audit: type=1403 audit(1757452516.265:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 9 21:15:16.804291 systemd[1]: Successfully loaded SELinux policy in 54.795ms.
Sep 9 21:15:16.804304 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.457ms.
Sep 9 21:15:16.804315 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 9 21:15:16.804326 systemd[1]: Detected virtualization kvm.
Sep 9 21:15:16.804336 systemd[1]: Detected architecture arm64.
Sep 9 21:15:16.804347 systemd[1]: Detected first boot.
Sep 9 21:15:16.804357 systemd[1]: Initializing machine ID from VM UUID.
Sep 9 21:15:16.804368 zram_generator::config[1085]: No configuration found.
Sep 9 21:15:16.804379 kernel: NET: Registered PF_VSOCK protocol family
Sep 9 21:15:16.804388 systemd[1]: Populated /etc with preset unit settings.
Sep 9 21:15:16.804399 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 9 21:15:16.804409 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 9 21:15:16.804419 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 9 21:15:16.804429 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 9 21:15:16.804442 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 9 21:15:16.804452 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 9 21:15:16.804463 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 9 21:15:16.804473 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 9 21:15:16.804485 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 9 21:15:16.804495 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 9 21:15:16.804505 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 9 21:15:16.804514 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 9 21:15:16.804537 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 9 21:15:16.804550 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 9 21:15:16.804574 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 9 21:15:16.804585 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 9 21:15:16.804595 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 9 21:15:16.804606 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 9 21:15:16.804615 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 9 21:15:16.804625 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 9 21:15:16.804635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 9 21:15:16.804645 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 9 21:15:16.804657 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 9 21:15:16.804669 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 9 21:15:16.804679 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 9 21:15:16.804689 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 9 21:15:16.804699 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 9 21:15:16.804709 systemd[1]: Reached target slices.target - Slice Units.
Sep 9 21:15:16.804718 systemd[1]: Reached target swap.target - Swaps.
Sep 9 21:15:16.804728 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 9 21:15:16.804738 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 9 21:15:16.804749 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 9 21:15:16.804759 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 9 21:15:16.804770 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 9 21:15:16.804780 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 9 21:15:16.804790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 9 21:15:16.804804 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 9 21:15:16.804815 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 9 21:15:16.804825 systemd[1]: Mounting media.mount - External Media Directory...
Sep 9 21:15:16.804835 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 9 21:15:16.804846 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 9 21:15:16.804856 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 9 21:15:16.804944 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 9 21:15:16.804960 systemd[1]: Reached target machines.target - Containers.
Sep 9 21:15:16.804970 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 9 21:15:16.804980 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:15:16.804990 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 9 21:15:16.804999 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 9 21:15:16.805013 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:15:16.805023 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 21:15:16.805034 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:15:16.805043 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 9 21:15:16.805053 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:15:16.805063 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 9 21:15:16.805077 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 9 21:15:16.805087 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 9 21:15:16.805097 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 9 21:15:16.805109 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 9 21:15:16.805120 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:15:16.805130 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 9 21:15:16.805140 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 9 21:15:16.805150 kernel: loop: module loaded
Sep 9 21:15:16.805160 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 9 21:15:16.805169 kernel: fuse: init (API version 7.41)
Sep 9 21:15:16.805179 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 9 21:15:16.805189 kernel: ACPI: bus type drm_connector registered
Sep 9 21:15:16.805202 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 9 21:15:16.805212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 9 21:15:16.805222 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 9 21:15:16.805232 systemd[1]: Stopped verity-setup.service.
Sep 9 21:15:16.805242 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 9 21:15:16.805253 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 9 21:15:16.805263 systemd[1]: Mounted media.mount - External Media Directory.
Sep 9 21:15:16.805273 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 9 21:15:16.805283 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 9 21:15:16.805293 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 9 21:15:16.805306 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 9 21:15:16.805343 systemd-journald[1152]: Collecting audit messages is disabled.
Sep 9 21:15:16.805366 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 9 21:15:16.805376 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 9 21:15:16.805386 systemd-journald[1152]: Journal started
Sep 9 21:15:16.805406 systemd-journald[1152]: Runtime Journal (/run/log/journal/72486dac949145de9061c7c50c7d250d) is 6M, max 48.5M, 42.4M free.
Sep 9 21:15:16.607863 systemd[1]: Queued start job for default target multi-user.target.
Sep 9 21:15:16.630519 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 9 21:15:16.630916 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 9 21:15:16.809196 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 9 21:15:16.808517 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:15:16.809381 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:15:16.810791 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 9 21:15:16.811893 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 21:15:16.812051 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 21:15:16.813105 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:15:16.813264 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:15:16.814619 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 9 21:15:16.814786 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 9 21:15:16.815803 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:15:16.815952 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:15:16.817265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 9 21:15:16.819969 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 9 21:15:16.821344 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 9 21:15:16.822710 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 9 21:15:16.834498 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 9 21:15:16.836746 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 9 21:15:16.838490 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 9 21:15:16.839490 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 9 21:15:16.839535 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 9 21:15:16.841210 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 9 21:15:16.851301 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 9 21:15:16.852622 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:15:16.853795 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 9 21:15:16.855408 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 9 21:15:16.856552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 21:15:16.857884 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 9 21:15:16.858786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 21:15:16.860736 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:15:16.864710 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 9 21:15:16.867712 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 9 21:15:16.868012 systemd-journald[1152]: Time spent on flushing to /var/log/journal/72486dac949145de9061c7c50c7d250d is 23.456ms for 892 entries.
Sep 9 21:15:16.868012 systemd-journald[1152]: System Journal (/var/log/journal/72486dac949145de9061c7c50c7d250d) is 8M, max 195.6M, 187.6M free.
Sep 9 21:15:16.910650 systemd-journald[1152]: Received client request to flush runtime journal.
Sep 9 21:15:16.910751 kernel: loop0: detected capacity change from 0 to 119368
Sep 9 21:15:16.910813 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 9 21:15:16.910834 kernel: loop1: detected capacity change from 0 to 203944
Sep 9 21:15:16.871271 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 9 21:15:16.872745 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 9 21:15:16.873902 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 9 21:15:16.885130 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:15:16.887602 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 9 21:15:16.891255 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 9 21:15:16.895912 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 9 21:15:16.904739 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 9 21:15:16.908168 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 9 21:15:16.913874 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 9 21:15:16.923230 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 9 21:15:16.934882 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 9 21:15:16.934904 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 9 21:15:16.936591 kernel: loop2: detected capacity change from 0 to 100632
Sep 9 21:15:16.940495 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 9 21:15:16.964598 kernel: loop3: detected capacity change from 0 to 119368
Sep 9 21:15:16.975496 kernel: loop4: detected capacity change from 0 to 203944
Sep 9 21:15:16.979588 kernel: loop5: detected capacity change from 0 to 100632
Sep 9 21:15:16.983064 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 9 21:15:16.983430 (sd-merge)[1223]: Merged extensions into '/usr'.
Sep 9 21:15:16.986823 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 9 21:15:16.986874 systemd[1]: Reloading...
Sep 9 21:15:17.029590 zram_generator::config[1248]: No configuration found.
Sep 9 21:15:17.125495 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 9 21:15:17.188202 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 9 21:15:17.188451 systemd[1]: Reloading finished in 201 ms.
Sep 9 21:15:17.221051 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 9 21:15:17.222254 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 9 21:15:17.238803 systemd[1]: Starting ensure-sysext.service...
Sep 9 21:15:17.240388 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 9 21:15:17.249529 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Sep 9 21:15:17.249547 systemd[1]: Reloading...
Sep 9 21:15:17.253148 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 9 21:15:17.253179 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 9 21:15:17.253381 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 9 21:15:17.253600 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 9 21:15:17.254200 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 9 21:15:17.254401 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Sep 9 21:15:17.254449 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Sep 9 21:15:17.257129 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 21:15:17.257144 systemd-tmpfiles[1284]: Skipping /boot
Sep 9 21:15:17.262762 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Sep 9 21:15:17.262778 systemd-tmpfiles[1284]: Skipping /boot
Sep 9 21:15:17.288829 zram_generator::config[1314]: No configuration found.
Sep 9 21:15:17.415477 systemd[1]: Reloading finished in 165 ms.
Sep 9 21:15:17.440082 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 9 21:15:17.445372 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 9 21:15:17.457588 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 21:15:17.459709 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 9 21:15:17.466874 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 9 21:15:17.471806 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 9 21:15:17.474619 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 9 21:15:17.477779 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 9 21:15:17.482643 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:15:17.489512 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:15:17.492904 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:15:17.497762 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:15:17.498716 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:15:17.498825 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:15:17.501593 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 9 21:15:17.503720 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:15:17.503893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:15:17.505383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:15:17.505544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:15:17.514223 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 9 21:15:17.515871 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:15:17.516148 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:15:17.517640 systemd-udevd[1357]: Using default interface naming scheme 'v255'.
Sep 9 21:15:17.518756 augenrules[1379]: No rules
Sep 9 21:15:17.519955 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 21:15:17.520888 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 21:15:17.522194 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 9 21:15:17.528260 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 21:15:17.529207 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 9 21:15:17.530082 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 9 21:15:17.538882 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 9 21:15:17.541698 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 9 21:15:17.543437 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 9 21:15:17.545439 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 9 21:15:17.545490 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 9 21:15:17.546768 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 9 21:15:17.550727 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 9 21:15:17.551493 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 9 21:15:17.551812 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 9 21:15:17.556593 systemd[1]: Finished ensure-sysext.service.
Sep 9 21:15:17.557932 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 9 21:15:17.558078 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 9 21:15:17.558169 augenrules[1387]: /sbin/augenrules: No change
Sep 9 21:15:17.562812 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 9 21:15:17.563844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 9 21:15:17.568717 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 9 21:15:17.568938 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 9 21:15:17.570218 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 9 21:15:17.570377 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 9 21:15:17.573742 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 9 21:15:17.574823 augenrules[1432]: No rules
Sep 9 21:15:17.577213 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 21:15:17.577654 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 21:15:17.583027 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 9 21:15:17.584057 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 9 21:15:17.584122 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 9 21:15:17.585683 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 9 21:15:17.626759 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 9 21:15:17.642175 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 9 21:15:17.678440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 9 21:15:17.683292 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 9 21:15:17.709715 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 9 21:15:17.719683 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 9 21:15:17.720310 systemd-resolved[1350]: Positive Trust Anchors:
Sep 9 21:15:17.720330 systemd-resolved[1350]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 9 21:15:17.720361 systemd-resolved[1350]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 9 21:15:17.720822 systemd[1]: Reached target time-set.target - System Time Set.
Sep 9 21:15:17.722145 systemd-networkd[1449]: lo: Link UP
Sep 9 21:15:17.722421 systemd-networkd[1449]: lo: Gained carrier
Sep 9 21:15:17.723982 systemd-networkd[1449]: Enumeration completed
Sep 9 21:15:17.724133 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 9 21:15:17.724708 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:15:17.724799 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 9 21:15:17.725394 systemd-networkd[1449]: eth0: Link UP
Sep 9 21:15:17.725795 systemd-networkd[1449]: eth0: Gained carrier
Sep 9 21:15:17.725872 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 9 21:15:17.726461 systemd-resolved[1350]: Defaulting to hostname 'linux'. Sep 9 21:15:17.726692 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 21:15:17.728843 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 21:15:17.729909 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 21:15:17.731029 systemd[1]: Reached target network.target - Network. Sep 9 21:15:17.731940 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 21:15:17.732969 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 21:15:17.734457 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 21:15:17.735547 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 21:15:17.736600 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 21:15:17.737511 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 21:15:17.738663 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 21:15:17.739553 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 21:15:17.739592 systemd[1]: Reached target paths.target - Path Units. Sep 9 21:15:17.739687 systemd-networkd[1449]: eth0: DHCPv4 address 10.0.0.61/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 21:15:17.740643 systemd[1]: Reached target timers.target - Timer Units. Sep 9 21:15:17.741127 systemd-timesyncd[1450]: Network configuration changed, trying to establish connection. Sep 9 21:15:17.742109 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Sep 9 21:15:17.744134 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 21:15:17.746365 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 21:15:17.747713 systemd-timesyncd[1450]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 21:15:17.747787 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 21:15:17.748113 systemd-timesyncd[1450]: Initial clock synchronization to Tue 2025-09-09 21:15:17.504331 UTC. Sep 9 21:15:17.748701 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 21:15:17.751957 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 21:15:17.753092 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 21:15:17.755642 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 21:15:17.756716 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 21:15:17.758152 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 21:15:17.761005 systemd[1]: Reached target basic.target - Basic System. Sep 9 21:15:17.762143 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:15:17.762177 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 21:15:17.763126 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 21:15:17.764840 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 21:15:17.767758 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 21:15:17.777862 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 21:15:17.780284 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Sep 9 21:15:17.782065 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 21:15:17.785123 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 21:15:17.787784 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 21:15:17.789344 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 21:15:17.792423 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 21:15:17.795737 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 21:15:17.797438 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 21:15:17.797865 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 21:15:17.798944 jq[1491]: false Sep 9 21:15:17.800016 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 21:15:17.805652 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 9 21:15:17.812084 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 21:15:17.813653 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 21:15:17.813836 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 21:15:17.817830 jq[1503]: true Sep 9 21:15:17.829451 jq[1518]: true Sep 9 21:15:17.833450 update_engine[1502]: I20250909 21:15:17.833212 1502 main.cc:92] Flatcar Update Engine starting Sep 9 21:15:17.840394 dbus-daemon[1478]: [system] SELinux support is enabled Sep 9 21:15:17.840581 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Sep 9 21:15:17.842904 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 21:15:17.844357 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 21:15:17.844830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 21:15:17.847372 update_engine[1502]: I20250909 21:15:17.847064 1502 update_check_scheduler.cc:74] Next update check in 8m1s Sep 9 21:15:17.847920 extend-filesystems[1492]: Found /dev/vda6 Sep 9 21:15:17.851665 extend-filesystems[1492]: Found /dev/vda9 Sep 9 21:15:17.848885 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 21:15:17.849059 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 21:15:17.851678 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 21:15:17.851779 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 21:15:17.853838 extend-filesystems[1492]: Checking size of /dev/vda9 Sep 9 21:15:17.854256 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 21:15:17.856629 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 21:15:17.860598 tar[1509]: linux-arm64/helm Sep 9 21:15:17.860647 systemd[1]: Started update-engine.service - Update Engine. Sep 9 21:15:17.862145 extend-filesystems[1492]: Resized partition /dev/vda9 Sep 9 21:15:17.863898 extend-filesystems[1534]: resize2fs 1.47.3 (8-Jul-2025) Sep 9 21:15:17.864580 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Sep 9 21:15:17.867495 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 21:15:17.873584 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 21:15:17.907644 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 21:15:17.929154 extend-filesystems[1534]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 21:15:17.929154 extend-filesystems[1534]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 21:15:17.929154 extend-filesystems[1534]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 9 21:15:17.932028 extend-filesystems[1492]: Resized filesystem in /dev/vda9 Sep 9 21:15:17.930449 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 21:15:17.932636 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 21:15:17.934576 bash[1550]: Updated "/home/core/.ssh/authorized_keys" Sep 9 21:15:17.959919 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 21:15:17.981622 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 21:15:17.984668 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 21:15:18.000452 systemd-logind[1501]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 21:15:18.001052 systemd-logind[1501]: New seat seat0. Sep 9 21:15:18.002229 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 21:15:18.005160 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Sep 9 21:15:18.045635 containerd[1524]: time="2025-09-09T21:15:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 21:15:18.046163 containerd[1524]: time="2025-09-09T21:15:18.046131657Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 21:15:18.058300 containerd[1524]: time="2025-09-09T21:15:18.057879346Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.929µs" Sep 9 21:15:18.058300 containerd[1524]: time="2025-09-09T21:15:18.057923793Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 21:15:18.058381 containerd[1524]: time="2025-09-09T21:15:18.058280495Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 21:15:18.058609 containerd[1524]: time="2025-09-09T21:15:18.058585380Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 21:15:18.058635 containerd[1524]: time="2025-09-09T21:15:18.058610745Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 21:15:18.058668 containerd[1524]: time="2025-09-09T21:15:18.058635645Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:15:18.058712 containerd[1524]: time="2025-09-09T21:15:18.058684552Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 21:15:18.058712 containerd[1524]: time="2025-09-09T21:15:18.058708327Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:15:18.058929 
containerd[1524]: time="2025-09-09T21:15:18.058910279Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 21:15:18.058950 containerd[1524]: time="2025-09-09T21:15:18.058929477Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:15:18.058950 containerd[1524]: time="2025-09-09T21:15:18.058940143Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 21:15:18.058950 containerd[1524]: time="2025-09-09T21:15:18.058947434Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 21:15:18.059024 containerd[1524]: time="2025-09-09T21:15:18.059011429Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 21:15:18.059200 containerd[1524]: time="2025-09-09T21:15:18.059183826Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 21:15:18.059228 containerd[1524]: time="2025-09-09T21:15:18.059215358Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 21:15:18.059247 containerd[1524]: time="2025-09-09T21:15:18.059228739Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 21:15:18.059277 containerd[1524]: time="2025-09-09T21:15:18.059266088Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 21:15:18.059487 containerd[1524]: 
time="2025-09-09T21:15:18.059473275Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 21:15:18.059557 containerd[1524]: time="2025-09-09T21:15:18.059542894Z" level=info msg="metadata content store policy set" policy=shared Sep 9 21:15:18.065916 containerd[1524]: time="2025-09-09T21:15:18.065879438Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 21:15:18.066109 containerd[1524]: time="2025-09-09T21:15:18.066089379Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 21:15:18.066255 containerd[1524]: time="2025-09-09T21:15:18.066234976Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066371614Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066395854Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066412919Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066424089Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066434910Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066445033Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066464968Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066477069Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066488860Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066630113Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066653073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066674599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066689531Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 21:15:18.067151 containerd[1524]: time="2025-09-09T21:15:18.066699421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066709583Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066719899Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066731535Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066742472Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 
21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066751974Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066761399Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066938528Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.066982005Z" level=info msg="Start snapshots syncer" Sep 9 21:15:18.067398 containerd[1524]: time="2025-09-09T21:15:18.067007370Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 21:15:18.067554 containerd[1524]: time="2025-09-09T21:15:18.067416548Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":fals
e,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 21:15:18.067648 containerd[1524]: time="2025-09-09T21:15:18.067573004Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 21:15:18.067773 containerd[1524]: time="2025-09-09T21:15:18.067747613Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 21:15:18.067980 containerd[1524]: time="2025-09-09T21:15:18.067902324Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 21:15:18.068017 containerd[1524]: time="2025-09-09T21:15:18.067986293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 21:15:18.068054 containerd[1524]: time="2025-09-09T21:15:18.068036092Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 21:15:18.068077 containerd[1524]: time="2025-09-09T21:15:18.068059440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 21:15:18.068077 containerd[1524]: time="2025-09-09T21:15:18.068072550Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 21:15:18.068116 containerd[1524]: 
time="2025-09-09T21:15:18.068082556Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 21:15:18.068157 containerd[1524]: time="2025-09-09T21:15:18.068093067Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 21:15:18.068188 containerd[1524]: time="2025-09-09T21:15:18.068175174Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 21:15:18.068244 containerd[1524]: time="2025-09-09T21:15:18.068230054Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 21:15:18.068263 containerd[1524]: time="2025-09-09T21:15:18.068250610Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 21:15:18.068349 containerd[1524]: time="2025-09-09T21:15:18.068290790Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:15:18.068378 containerd[1524]: time="2025-09-09T21:15:18.068350363Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 21:15:18.068378 containerd[1524]: time="2025-09-09T21:15:18.068359827Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:15:18.068378 containerd[1524]: time="2025-09-09T21:15:18.068369717Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 21:15:18.068378 containerd[1524]: time="2025-09-09T21:15:18.068377125Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 21:15:18.068452 containerd[1524]: time="2025-09-09T21:15:18.068386200Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 21:15:18.068470 containerd[1524]: time="2025-09-09T21:15:18.068450117Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 21:15:18.068604 containerd[1524]: time="2025-09-09T21:15:18.068586406Z" level=info msg="runtime interface created" Sep 9 21:15:18.068631 containerd[1524]: time="2025-09-09T21:15:18.068600757Z" level=info msg="created NRI interface" Sep 9 21:15:18.068631 containerd[1524]: time="2025-09-09T21:15:18.068619955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 21:15:18.068663 containerd[1524]: time="2025-09-09T21:15:18.068631745Z" level=info msg="Connect containerd service" Sep 9 21:15:18.068736 containerd[1524]: time="2025-09-09T21:15:18.068716102Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 21:15:18.070146 containerd[1524]: time="2025-09-09T21:15:18.070063167Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 21:15:18.130904 containerd[1524]: time="2025-09-09T21:15:18.130856456Z" level=info msg="Start subscribing containerd event" Sep 9 21:15:18.131071 containerd[1524]: time="2025-09-09T21:15:18.131029086Z" level=info msg="Start recovering state" Sep 9 21:15:18.131163 containerd[1524]: time="2025-09-09T21:15:18.131133378Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 21:15:18.131195 containerd[1524]: time="2025-09-09T21:15:18.131183992Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Sep 9 21:15:18.131442 containerd[1524]: time="2025-09-09T21:15:18.131352937Z" level=info msg="Start event monitor" Sep 9 21:15:18.131442 containerd[1524]: time="2025-09-09T21:15:18.131382840Z" level=info msg="Start cni network conf syncer for default" Sep 9 21:15:18.131442 containerd[1524]: time="2025-09-09T21:15:18.131391877Z" level=info msg="Start streaming server" Sep 9 21:15:18.131442 containerd[1524]: time="2025-09-09T21:15:18.131402349Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 21:15:18.131442 containerd[1524]: time="2025-09-09T21:15:18.131408748Z" level=info msg="runtime interface starting up..." Sep 9 21:15:18.131442 containerd[1524]: time="2025-09-09T21:15:18.131422323Z" level=info msg="starting plugins..." Sep 9 21:15:18.131625 containerd[1524]: time="2025-09-09T21:15:18.131611669Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 21:15:18.131894 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 21:15:18.132078 containerd[1524]: time="2025-09-09T21:15:18.131818119Z" level=info msg="containerd successfully booted in 0.086528s" Sep 9 21:15:18.174244 tar[1509]: linux-arm64/LICENSE Sep 9 21:15:18.174358 tar[1509]: linux-arm64/README.md Sep 9 21:15:18.190427 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 21:15:19.152722 sshd_keygen[1520]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 21:15:19.173606 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 21:15:19.176218 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 21:15:19.202106 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 21:15:19.203626 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 21:15:19.205855 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 21:15:19.230500 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Sep 9 21:15:19.235021 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 21:15:19.236882 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 21:15:19.238113 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 21:15:19.391683 systemd-networkd[1449]: eth0: Gained IPv6LL Sep 9 21:15:19.394664 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 21:15:19.396045 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 21:15:19.398155 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 21:15:19.400172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:15:19.401999 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 21:15:19.422434 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 21:15:19.422670 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 21:15:19.424078 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 21:15:19.425798 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 21:15:19.929469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:15:19.930852 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 21:15:19.931832 systemd[1]: Startup finished in 2.004s (kernel) + 5.661s (initrd) + 3.721s (userspace) = 11.388s. 
Sep 9 21:15:19.932970 (kubelet)[1628]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 21:15:20.282717 kubelet[1628]: E0909 21:15:20.282612 1628 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 21:15:20.285241 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 21:15:20.285379 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 21:15:20.286619 systemd[1]: kubelet.service: Consumed 765ms CPU time, 257.1M memory peak.
Sep 9 21:15:22.991943 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 9 21:15:22.993052 systemd[1]: Started sshd@0-10.0.0.61:22-10.0.0.1:49100.service - OpenSSH per-connection server daemon (10.0.0.1:49100).
Sep 9 21:15:23.049982 sshd[1641]: Accepted publickey for core from 10.0.0.1 port 49100 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:15:23.051661 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:15:23.057279 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 9 21:15:23.058169 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 9 21:15:23.064509 systemd-logind[1501]: New session 1 of user core.
Sep 9 21:15:23.079635 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 9 21:15:23.082647 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 9 21:15:23.094537 (systemd)[1646]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 9 21:15:23.097031 systemd-logind[1501]: New session c1 of user core.
Sep 9 21:15:23.199865 systemd[1646]: Queued start job for default target default.target.
Sep 9 21:15:23.221524 systemd[1646]: Created slice app.slice - User Application Slice.
Sep 9 21:15:23.221581 systemd[1646]: Reached target paths.target - Paths.
Sep 9 21:15:23.221624 systemd[1646]: Reached target timers.target - Timers.
Sep 9 21:15:23.222711 systemd[1646]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 9 21:15:23.230951 systemd[1646]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 9 21:15:23.231008 systemd[1646]: Reached target sockets.target - Sockets.
Sep 9 21:15:23.231044 systemd[1646]: Reached target basic.target - Basic System.
Sep 9 21:15:23.231073 systemd[1646]: Reached target default.target - Main User Target.
Sep 9 21:15:23.231095 systemd[1646]: Startup finished in 129ms.
Sep 9 21:15:23.231193 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 9 21:15:23.232447 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 9 21:15:23.287273 systemd[1]: Started sshd@1-10.0.0.61:22-10.0.0.1:49102.service - OpenSSH per-connection server daemon (10.0.0.1:49102).
Sep 9 21:15:23.339316 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 49102 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:15:23.340412 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:15:23.344019 systemd-logind[1501]: New session 2 of user core.
Sep 9 21:15:23.353693 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 9 21:15:23.403633 sshd[1660]: Connection closed by 10.0.0.1 port 49102
Sep 9 21:15:23.404201 sshd-session[1657]: pam_unix(sshd:session): session closed for user core
Sep 9 21:15:23.415391 systemd[1]: sshd@1-10.0.0.61:22-10.0.0.1:49102.service: Deactivated successfully.
Sep 9 21:15:23.417822 systemd[1]: session-2.scope: Deactivated successfully.
Sep 9 21:15:23.418453 systemd-logind[1501]: Session 2 logged out. Waiting for processes to exit.
Sep 9 21:15:23.420476 systemd[1]: Started sshd@2-10.0.0.61:22-10.0.0.1:49118.service - OpenSSH per-connection server daemon (10.0.0.1:49118).
Sep 9 21:15:23.421053 systemd-logind[1501]: Removed session 2.
Sep 9 21:15:23.475412 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 49118 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:15:23.476442 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:15:23.480104 systemd-logind[1501]: New session 3 of user core.
Sep 9 21:15:23.494731 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 9 21:15:23.541166 sshd[1669]: Connection closed by 10.0.0.1 port 49118
Sep 9 21:15:23.540982 sshd-session[1666]: pam_unix(sshd:session): session closed for user core
Sep 9 21:15:23.559336 systemd[1]: sshd@2-10.0.0.61:22-10.0.0.1:49118.service: Deactivated successfully.
Sep 9 21:15:23.561770 systemd[1]: session-3.scope: Deactivated successfully.
Sep 9 21:15:23.562363 systemd-logind[1501]: Session 3 logged out. Waiting for processes to exit.
Sep 9 21:15:23.564373 systemd[1]: Started sshd@3-10.0.0.61:22-10.0.0.1:49126.service - OpenSSH per-connection server daemon (10.0.0.1:49126).
Sep 9 21:15:23.564978 systemd-logind[1501]: Removed session 3.
Sep 9 21:15:23.624685 sshd[1675]: Accepted publickey for core from 10.0.0.1 port 49126 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:15:23.625806 sshd-session[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:15:23.630277 systemd-logind[1501]: New session 4 of user core.
Sep 9 21:15:23.638727 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 9 21:15:23.689753 sshd[1678]: Connection closed by 10.0.0.1 port 49126
Sep 9 21:15:23.690322 sshd-session[1675]: pam_unix(sshd:session): session closed for user core
Sep 9 21:15:23.703365 systemd[1]: sshd@3-10.0.0.61:22-10.0.0.1:49126.service: Deactivated successfully.
Sep 9 21:15:23.705693 systemd[1]: session-4.scope: Deactivated successfully.
Sep 9 21:15:23.706304 systemd-logind[1501]: Session 4 logged out. Waiting for processes to exit.
Sep 9 21:15:23.708308 systemd[1]: Started sshd@4-10.0.0.61:22-10.0.0.1:49130.service - OpenSSH per-connection server daemon (10.0.0.1:49130).
Sep 9 21:15:23.709084 systemd-logind[1501]: Removed session 4.
Sep 9 21:15:23.768558 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 49130 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:15:23.769623 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:15:23.773992 systemd-logind[1501]: New session 5 of user core.
Sep 9 21:15:23.782717 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 9 21:15:23.838082 sudo[1688]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 9 21:15:23.838333 sudo[1688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:15:23.851621 sudo[1688]: pam_unix(sudo:session): session closed for user root
Sep 9 21:15:23.852940 sshd[1687]: Connection closed by 10.0.0.1 port 49130
Sep 9 21:15:23.853426 sshd-session[1684]: pam_unix(sshd:session): session closed for user core
Sep 9 21:15:23.862462 systemd[1]: sshd@4-10.0.0.61:22-10.0.0.1:49130.service: Deactivated successfully.
Sep 9 21:15:23.864880 systemd[1]: session-5.scope: Deactivated successfully.
Sep 9 21:15:23.866723 systemd-logind[1501]: Session 5 logged out. Waiting for processes to exit.
Sep 9 21:15:23.868935 systemd[1]: Started sshd@5-10.0.0.61:22-10.0.0.1:49142.service - OpenSSH per-connection server daemon (10.0.0.1:49142).
Sep 9 21:15:23.869435 systemd-logind[1501]: Removed session 5.
Sep 9 21:15:23.924104 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 49142 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:15:23.925368 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:15:23.929634 systemd-logind[1501]: New session 6 of user core.
Sep 9 21:15:23.940716 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 9 21:15:23.989854 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 9 21:15:23.990114 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:15:24.035610 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 9 21:15:24.040330 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 9 21:15:24.040864 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:15:24.048942 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 9 21:15:24.079640 augenrules[1721]: No rules
Sep 9 21:15:24.080686 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 9 21:15:24.080892 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 9 21:15:24.082011 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 9 21:15:24.083070 sshd[1697]: Connection closed by 10.0.0.1 port 49142
Sep 9 21:15:24.083384 sshd-session[1694]: pam_unix(sshd:session): session closed for user core
Sep 9 21:15:24.095321 systemd[1]: sshd@5-10.0.0.61:22-10.0.0.1:49142.service: Deactivated successfully.
Sep 9 21:15:24.096963 systemd[1]: session-6.scope: Deactivated successfully.
Sep 9 21:15:24.098032 systemd-logind[1501]: Session 6 logged out. Waiting for processes to exit.
Sep 9 21:15:24.099685 systemd[1]: Started sshd@6-10.0.0.61:22-10.0.0.1:49146.service - OpenSSH per-connection server daemon (10.0.0.1:49146).
Sep 9 21:15:24.100704 systemd-logind[1501]: Removed session 6.
Sep 9 21:15:24.161603 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 49146 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:15:24.162606 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:15:24.166270 systemd-logind[1501]: New session 7 of user core.
Sep 9 21:15:24.176785 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 9 21:15:24.227008 sudo[1734]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 9 21:15:24.227274 sudo[1734]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 9 21:15:24.486089 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 9 21:15:24.501983 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 9 21:15:24.695989 dockerd[1754]: time="2025-09-09T21:15:24.695929015Z" level=info msg="Starting up"
Sep 9 21:15:24.696723 dockerd[1754]: time="2025-09-09T21:15:24.696703704Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 9 21:15:24.706594 dockerd[1754]: time="2025-09-09T21:15:24.706539124Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 9 21:15:24.829258 dockerd[1754]: time="2025-09-09T21:15:24.828899773Z" level=info msg="Loading containers: start."
Sep 9 21:15:24.836593 kernel: Initializing XFRM netlink socket
Sep 9 21:15:25.015680 systemd-networkd[1449]: docker0: Link UP
Sep 9 21:15:25.018974 dockerd[1754]: time="2025-09-09T21:15:25.018932697Z" level=info msg="Loading containers: done."
Sep 9 21:15:25.032873 dockerd[1754]: time="2025-09-09T21:15:25.032827693Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 9 21:15:25.033002 dockerd[1754]: time="2025-09-09T21:15:25.032917568Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 9 21:15:25.033002 dockerd[1754]: time="2025-09-09T21:15:25.032992740Z" level=info msg="Initializing buildkit"
Sep 9 21:15:25.052830 dockerd[1754]: time="2025-09-09T21:15:25.052791796Z" level=info msg="Completed buildkit initialization"
Sep 9 21:15:25.057214 dockerd[1754]: time="2025-09-09T21:15:25.057179607Z" level=info msg="Daemon has completed initialization"
Sep 9 21:15:25.057334 dockerd[1754]: time="2025-09-09T21:15:25.057279955Z" level=info msg="API listen on /run/docker.sock"
Sep 9 21:15:25.057376 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 9 21:15:25.546135 containerd[1524]: time="2025-09-09T21:15:25.545762734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\""
Sep 9 21:15:26.097843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330733803.mount: Deactivated successfully.
Sep 9 21:15:27.093270 containerd[1524]: time="2025-09-09T21:15:27.093206882Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:27.094479 containerd[1524]: time="2025-09-09T21:15:27.094434960Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.12: active requests=0, bytes read=25652443"
Sep 9 21:15:27.095265 containerd[1524]: time="2025-09-09T21:15:27.095234944Z" level=info msg="ImageCreate event name:\"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:27.098651 containerd[1524]: time="2025-09-09T21:15:27.098606260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:27.099656 containerd[1524]: time="2025-09-09T21:15:27.099623878Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.12\" with image id \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e9011c3bee8c06ecabd7816e119dca4e448c92f7a78acd891de3d2db1dc6c234\", size \"25649241\" in 1.553815037s"
Sep 9 21:15:27.099699 containerd[1524]: time="2025-09-09T21:15:27.099657726Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.12\" returns image reference \"sha256:25d00c9505e8a4a7a6c827030f878b50e58bbf63322e01a7d92807bcb4db6b3d\""
Sep 9 21:15:27.100942 containerd[1524]: time="2025-09-09T21:15:27.100887190Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\""
Sep 9 21:15:28.145346 containerd[1524]: time="2025-09-09T21:15:28.145286085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:28.146909 containerd[1524]: time="2025-09-09T21:15:28.146875913Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.12: active requests=0, bytes read=22460311"
Sep 9 21:15:28.147876 containerd[1524]: time="2025-09-09T21:15:28.147844072Z" level=info msg="ImageCreate event name:\"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:28.150898 containerd[1524]: time="2025-09-09T21:15:28.150864850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:28.151815 containerd[1524]: time="2025-09-09T21:15:28.151781424Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.12\" with image id \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d2862f94d87320267fddbd55db26556a267aa802e51d6b60f25786b4c428afc8\", size \"23997423\" in 1.05085903s"
Sep 9 21:15:28.151857 containerd[1524]: time="2025-09-09T21:15:28.151815747Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.12\" returns image reference \"sha256:04df324666956d4cb57096c0edff6bfe1d75e71fb8f508dec8818f2842f821e1\""
Sep 9 21:15:28.152290 containerd[1524]: time="2025-09-09T21:15:28.152269808Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\""
Sep 9 21:15:29.238823 containerd[1524]: time="2025-09-09T21:15:29.238044999Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:29.238823 containerd[1524]: time="2025-09-09T21:15:29.238815373Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.12: active requests=0, bytes read=17125905"
Sep 9 21:15:29.239393 containerd[1524]: time="2025-09-09T21:15:29.239370701Z" level=info msg="ImageCreate event name:\"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:29.242490 containerd[1524]: time="2025-09-09T21:15:29.242460299Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:29.243496 containerd[1524]: time="2025-09-09T21:15:29.243471456Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.12\" with image id \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:152943b7e30244f4415fd0a5860a2dccd91660fe983d30a28a10edb0cc8f6756\", size \"18663035\" in 1.091171878s"
Sep 9 21:15:29.243496 containerd[1524]: time="2025-09-09T21:15:29.243500015Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.12\" returns image reference \"sha256:00b0619122c2d4fd3b5e102e9850d8c732e08a386b9c172c409b3a5cd552e07d\""
Sep 9 21:15:29.244304 containerd[1524]: time="2025-09-09T21:15:29.244282106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\""
Sep 9 21:15:30.122614 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount777022422.mount: Deactivated successfully.
Sep 9 21:15:30.361946 containerd[1524]: time="2025-09-09T21:15:30.361903424Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:30.362597 containerd[1524]: time="2025-09-09T21:15:30.362573219Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.12: active requests=0, bytes read=26916097"
Sep 9 21:15:30.363577 containerd[1524]: time="2025-09-09T21:15:30.363467671Z" level=info msg="ImageCreate event name:\"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:30.367296 containerd[1524]: time="2025-09-09T21:15:30.367249304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:30.368809 containerd[1524]: time="2025-09-09T21:15:30.368780754Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.12\" with image id \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\", repo tag \"registry.k8s.io/kube-proxy:v1.31.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:90aa6b5f4065937521ff8438bc705317485d0be3f8b00a07145e697d92cc2cc6\", size \"26915114\" in 1.124464755s"
Sep 9 21:15:30.368868 containerd[1524]: time="2025-09-09T21:15:30.368813512Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.12\" returns image reference \"sha256:25c7652bd0d893b147dce9135dc6a68c37da76f9a20dceec1d520782031b2f36\""
Sep 9 21:15:30.369230 containerd[1524]: time="2025-09-09T21:15:30.369208718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 9 21:15:30.535710 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 9 21:15:30.537023 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:15:30.671953 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:15:30.675895 (kubelet)[2051]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 9 21:15:30.753809 kubelet[2051]: E0909 21:15:30.753757 2051 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 9 21:15:30.756739 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 9 21:15:30.756864 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 9 21:15:30.757682 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.3M memory peak.
Sep 9 21:15:30.905239 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3836412665.mount: Deactivated successfully.
Sep 9 21:15:31.565180 containerd[1524]: time="2025-09-09T21:15:31.565135850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:31.566157 containerd[1524]: time="2025-09-09T21:15:31.566077580Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 9 21:15:31.567793 containerd[1524]: time="2025-09-09T21:15:31.567734379Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:31.570978 containerd[1524]: time="2025-09-09T21:15:31.570436391Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:31.571555 containerd[1524]: time="2025-09-09T21:15:31.571521907Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.202282769s"
Sep 9 21:15:31.571653 containerd[1524]: time="2025-09-09T21:15:31.571638638Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 9 21:15:31.572156 containerd[1524]: time="2025-09-09T21:15:31.572123548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 9 21:15:32.118530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3652688667.mount: Deactivated successfully.
Sep 9 21:15:32.123586 containerd[1524]: time="2025-09-09T21:15:32.123248426Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:15:32.123950 containerd[1524]: time="2025-09-09T21:15:32.123761172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 9 21:15:32.124655 containerd[1524]: time="2025-09-09T21:15:32.124620924Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:15:32.126486 containerd[1524]: time="2025-09-09T21:15:32.126450431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 9 21:15:32.127160 containerd[1524]: time="2025-09-09T21:15:32.127118884Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 554.870443ms"
Sep 9 21:15:32.127262 containerd[1524]: time="2025-09-09T21:15:32.127245289Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 9 21:15:32.127836 containerd[1524]: time="2025-09-09T21:15:32.127810110Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 9 21:15:32.612368 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2341717233.mount: Deactivated successfully.
Sep 9 21:15:33.875042 containerd[1524]: time="2025-09-09T21:15:33.874999309Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:33.876544 containerd[1524]: time="2025-09-09T21:15:33.876504353Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 9 21:15:33.878355 containerd[1524]: time="2025-09-09T21:15:33.878317846Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:33.882312 containerd[1524]: time="2025-09-09T21:15:33.882272884Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:15:33.882931 containerd[1524]: time="2025-09-09T21:15:33.882886677Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.755043678s"
Sep 9 21:15:33.882931 containerd[1524]: time="2025-09-09T21:15:33.882911614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 9 21:15:39.391247 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:15:39.391391 systemd[1]: kubelet.service: Consumed 147ms CPU time, 108.3M memory peak.
Sep 9 21:15:39.393255 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:15:39.413005 systemd[1]: Reload requested from client PID 2196 ('systemctl') (unit session-7.scope)...
Sep 9 21:15:39.413019 systemd[1]: Reloading...
Sep 9 21:15:39.474598 zram_generator::config[2240]: No configuration found.
Sep 9 21:15:39.631068 systemd[1]: Reloading finished in 217 ms.
Sep 9 21:15:39.675003 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 9 21:15:39.675076 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 9 21:15:39.675308 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:15:39.675352 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95.1M memory peak.
Sep 9 21:15:39.678703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 9 21:15:39.789294 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 9 21:15:39.792678 (kubelet)[2285]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 9 21:15:39.827659 kubelet[2285]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:15:39.827659 kubelet[2285]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 9 21:15:39.827659 kubelet[2285]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 9 21:15:39.827956 kubelet[2285]: I0909 21:15:39.827747 2285 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 9 21:15:40.402111 kubelet[2285]: I0909 21:15:40.402067 2285 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 9 21:15:40.402111 kubelet[2285]: I0909 21:15:40.402097 2285 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 9 21:15:40.402447 kubelet[2285]: I0909 21:15:40.402421 2285 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 9 21:15:40.419475 kubelet[2285]: E0909 21:15:40.419429 2285 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.61:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError"
Sep 9 21:15:40.420628 kubelet[2285]: I0909 21:15:40.420603 2285 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 9 21:15:40.427721 kubelet[2285]: I0909 21:15:40.427701 2285 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 9 21:15:40.431381 kubelet[2285]: I0909 21:15:40.431024 2285 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 9 21:15:40.431865 kubelet[2285]: I0909 21:15:40.431846 2285 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 9 21:15:40.432081 kubelet[2285]: I0909 21:15:40.432052 2285 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 9 21:15:40.432300 kubelet[2285]: I0909 21:15:40.432129 2285 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 9 21:15:40.432434 kubelet[2285]: I0909 21:15:40.432420 2285 topology_manager.go:138] "Creating topology manager with none policy"
Sep 9 21:15:40.432486 kubelet[2285]: I0909 21:15:40.432478 2285 container_manager_linux.go:300] "Creating device plugin manager"
Sep 9 21:15:40.432769 kubelet[2285]: I0909 21:15:40.432750 2285 state_mem.go:36] "Initialized new in-memory state store"
Sep 9 21:15:40.435003 kubelet[2285]: I0909 21:15:40.434981 2285 kubelet.go:408] "Attempting to sync node with API server"
Sep 9 21:15:40.435097 kubelet[2285]: I0909 21:15:40.435087 2285 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 9 21:15:40.435170 kubelet[2285]: I0909 21:15:40.435160 2285 kubelet.go:314] "Adding apiserver pod source"
Sep 9 21:15:40.435267 kubelet[2285]: I0909 21:15:40.435258 2285 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 9 21:15:40.439048 kubelet[2285]: I0909 21:15:40.439029 2285 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 9 21:15:40.440556 kubelet[2285]: W0909 21:15:40.440028 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused
Sep 9 21:15:40.440556 kubelet[2285]: E0909 21:15:40.440105 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.61:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError"
Sep 9 21:15:40.440556 kubelet[2285]: W0909 21:15:40.440414 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused
Sep 9 21:15:40.440556 kubelet[2285]: E0909 21:15:40.440461 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.61:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError"
Sep 9 21:15:40.440731 kubelet[2285]: I0909 21:15:40.440613 2285 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 9 21:15:40.440842 kubelet[2285]: W0909 21:15:40.440824 2285 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 9 21:15:40.442527 kubelet[2285]: I0909 21:15:40.442475 2285 server.go:1274] "Started kubelet"
Sep 9 21:15:40.442846 kubelet[2285]: I0909 21:15:40.442774 2285 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 9 21:15:40.443109 kubelet[2285]: I0909 21:15:40.443023 2285 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 9 21:15:40.443328 kubelet[2285]: I0909 21:15:40.443278 2285 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 9 21:15:40.445206 kubelet[2285]: I0909 21:15:40.445168 2285 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 9 21:15:40.446411 kubelet[2285]: I0909 21:15:40.446373 2285 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 9 21:15:40.447891 kubelet[2285]: I0909 21:15:40.447861 2285 server.go:449] "Adding debug handlers to kubelet server"
Sep 9 21:15:40.447955 kubelet[2285]: I0909 21:15:40.447922 2285 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 9 21:15:40.448125 kubelet[2285]: E0909 21:15:40.448106 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 9 21:15:40.448184 kubelet[2285]: E0909 21:15:40.445500 2285 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.61:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.61:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863b9c62eae0e36 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-09 21:15:40.442447414 +0000 UTC m=+0.647030395,LastTimestamp:2025-09-09 21:15:40.442447414 +0000 UTC m=+0.647030395,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 9 21:15:40.448751 kubelet[2285]: I0909 21:15:40.448731 2285 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 9 21:15:40.448810 kubelet[2285]: I0909 21:15:40.448797 2285 reconciler.go:26] "Reconciler: start to sync state"
Sep 9 21:15:40.448835 kubelet[2285]: E0909 21:15:40.448789 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="200ms"
Sep 9 21:15:40.449169 kubelet[2285]: W0909 21:15:40.449133 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused
Sep 9 21:15:40.449211 kubelet[2285]: E0909 21:15:40.449175 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.61:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError"
Sep 9 21:15:40.449281 kubelet[2285]: E0909 21:15:40.449252 2285 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 9 21:15:40.449435 kubelet[2285]: I0909 21:15:40.449418 2285 factory.go:221] Registration of the systemd container factory successfully
Sep 9 21:15:40.449598 kubelet[2285]: I0909 21:15:40.449579 2285 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 9 21:15:40.450812 kubelet[2285]: I0909 21:15:40.450790 2285 factory.go:221] Registration of the containerd container factory successfully
Sep 9 21:15:40.461082 kubelet[2285]: I0909 21:15:40.460960 2285 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 9 21:15:40.461961 kubelet[2285]: I0909 21:15:40.461944 2285 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" Sep 9 21:15:40.462033 kubelet[2285]: I0909 21:15:40.462025 2285 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 21:15:40.462089 kubelet[2285]: I0909 21:15:40.462081 2285 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 21:15:40.462222 kubelet[2285]: E0909 21:15:40.462192 2285 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:15:40.465598 kubelet[2285]: W0909 21:15:40.465402 2285 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.61:6443: connect: connection refused Sep 9 21:15:40.465598 kubelet[2285]: E0909 21:15:40.465454 2285 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.61:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.61:6443: connect: connection refused" logger="UnhandledError" Sep 9 21:15:40.465876 kubelet[2285]: I0909 21:15:40.465853 2285 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 21:15:40.465876 kubelet[2285]: I0909 21:15:40.465869 2285 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 21:15:40.465931 kubelet[2285]: I0909 21:15:40.465887 2285 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:15:40.548309 kubelet[2285]: E0909 21:15:40.548255 2285 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:15:40.562592 kubelet[2285]: E0909 21:15:40.562552 2285 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 9 21:15:40.573933 kubelet[2285]: I0909 21:15:40.573909 2285 policy_none.go:49] "None policy: Start" 
Sep 9 21:15:40.574783 kubelet[2285]: I0909 21:15:40.574764 2285 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 21:15:40.574848 kubelet[2285]: I0909 21:15:40.574791 2285 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:15:40.582942 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 9 21:15:40.599370 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 9 21:15:40.602595 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 9 21:15:40.626435 kubelet[2285]: I0909 21:15:40.626389 2285 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 21:15:40.626801 kubelet[2285]: I0909 21:15:40.626609 2285 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:15:40.626801 kubelet[2285]: I0909 21:15:40.626625 2285 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:15:40.626895 kubelet[2285]: I0909 21:15:40.626866 2285 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:15:40.628056 kubelet[2285]: E0909 21:15:40.628019 2285 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 9 21:15:40.649233 kubelet[2285]: E0909 21:15:40.649185 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="400ms" Sep 9 21:15:40.729662 kubelet[2285]: I0909 21:15:40.729511 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:15:40.730880 kubelet[2285]: E0909 21:15:40.730835 2285 kubelet_node_status.go:95] "Unable to register node with API server" 
err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Sep 9 21:15:40.772452 systemd[1]: Created slice kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice - libcontainer container kubepods-burstable-podfec3f691a145cb26ff55e4af388500b7.slice. Sep 9 21:15:40.789622 systemd[1]: Created slice kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice - libcontainer container kubepods-burstable-pod5dc878868de11c6196259ae42039f4ff.slice. Sep 9 21:15:40.812163 systemd[1]: Created slice kubepods-burstable-pod57edd24af1f1a60ebec264cff4d8b05f.slice - libcontainer container kubepods-burstable-pod57edd24af1f1a60ebec264cff4d8b05f.slice. Sep 9 21:15:40.851250 kubelet[2285]: I0909 21:15:40.851195 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:40.851626 kubelet[2285]: I0909 21:15:40.851296 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:40.851626 kubelet[2285]: I0909 21:15:40.851323 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:15:40.851626 kubelet[2285]: I0909 21:15:40.851340 2285 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57edd24af1f1a60ebec264cff4d8b05f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57edd24af1f1a60ebec264cff4d8b05f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:40.851626 kubelet[2285]: I0909 21:15:40.851356 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57edd24af1f1a60ebec264cff4d8b05f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57edd24af1f1a60ebec264cff4d8b05f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:40.851626 kubelet[2285]: I0909 21:15:40.851371 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:40.851746 kubelet[2285]: I0909 21:15:40.851385 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:40.851746 kubelet[2285]: I0909 21:15:40.851398 2285 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57edd24af1f1a60ebec264cff4d8b05f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57edd24af1f1a60ebec264cff4d8b05f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:40.851746 kubelet[2285]: I0909 21:15:40.851413 2285 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:40.932549 kubelet[2285]: I0909 21:15:40.932506 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:15:40.932964 kubelet[2285]: E0909 21:15:40.932931 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Sep 9 21:15:41.050165 kubelet[2285]: E0909 21:15:41.050055 2285 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.61:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.61:6443: connect: connection refused" interval="800ms" Sep 9 21:15:41.088420 kubelet[2285]: E0909 21:15:41.088383 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.089046 containerd[1524]: time="2025-09-09T21:15:41.089011169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,}" Sep 9 21:15:41.105734 containerd[1524]: time="2025-09-09T21:15:41.105685764Z" level=info msg="connecting to shim f44f06d7dfa3f6fd34728ec081f206642137fe3d3db8778f3940682163b6718b" address="unix:///run/containerd/s/b3712ee1a5ca3109b8cf917aaf38bead74dfc379171482dc7e23dc289db77dcb" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:15:41.110381 kubelet[2285]: E0909 21:15:41.110138 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.111743 containerd[1524]: time="2025-09-09T21:15:41.111703714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,}" Sep 9 21:15:41.118113 kubelet[2285]: E0909 21:15:41.118060 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.118717 containerd[1524]: time="2025-09-09T21:15:41.118646599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57edd24af1f1a60ebec264cff4d8b05f,Namespace:kube-system,Attempt:0,}" Sep 9 21:15:41.134589 containerd[1524]: time="2025-09-09T21:15:41.134450422Z" level=info msg="connecting to shim 404d3db6803670e3613d7c5b59725711360393eeecfc63e77e39168338382e0a" address="unix:///run/containerd/s/0b6343a5e9dd645bbd4d8c186303fcb44e4e87193f17fbca44dcbbd614c17196" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:15:41.135734 systemd[1]: Started cri-containerd-f44f06d7dfa3f6fd34728ec081f206642137fe3d3db8778f3940682163b6718b.scope - libcontainer container f44f06d7dfa3f6fd34728ec081f206642137fe3d3db8778f3940682163b6718b. Sep 9 21:15:41.144406 containerd[1524]: time="2025-09-09T21:15:41.144268012Z" level=info msg="connecting to shim 6a5c3435fd54dd1392da95ffbed191abc1e8ecdda60b39a831f91129da634d1e" address="unix:///run/containerd/s/32e1e657ba216fa7f2ef30226d04fde6f8ec49fcbfe77891173e63cabd35f424" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:15:41.171363 systemd[1]: Started cri-containerd-404d3db6803670e3613d7c5b59725711360393eeecfc63e77e39168338382e0a.scope - libcontainer container 404d3db6803670e3613d7c5b59725711360393eeecfc63e77e39168338382e0a. 
Sep 9 21:15:41.172746 systemd[1]: Started cri-containerd-6a5c3435fd54dd1392da95ffbed191abc1e8ecdda60b39a831f91129da634d1e.scope - libcontainer container 6a5c3435fd54dd1392da95ffbed191abc1e8ecdda60b39a831f91129da634d1e. Sep 9 21:15:41.181982 containerd[1524]: time="2025-09-09T21:15:41.181837808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:fec3f691a145cb26ff55e4af388500b7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f44f06d7dfa3f6fd34728ec081f206642137fe3d3db8778f3940682163b6718b\"" Sep 9 21:15:41.182971 kubelet[2285]: E0909 21:15:41.182946 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.185693 containerd[1524]: time="2025-09-09T21:15:41.185320135Z" level=info msg="CreateContainer within sandbox \"f44f06d7dfa3f6fd34728ec081f206642137fe3d3db8778f3940682163b6718b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 9 21:15:41.194837 containerd[1524]: time="2025-09-09T21:15:41.194755263Z" level=info msg="Container 7e3522b0f098ca22fa258d347762ddcf864ba75b87e3110046237b19c78ad08e: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:15:41.205052 containerd[1524]: time="2025-09-09T21:15:41.205009676Z" level=info msg="CreateContainer within sandbox \"f44f06d7dfa3f6fd34728ec081f206642137fe3d3db8778f3940682163b6718b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7e3522b0f098ca22fa258d347762ddcf864ba75b87e3110046237b19c78ad08e\"" Sep 9 21:15:41.207113 containerd[1524]: time="2025-09-09T21:15:41.207064697Z" level=info msg="StartContainer for \"7e3522b0f098ca22fa258d347762ddcf864ba75b87e3110046237b19c78ad08e\"" Sep 9 21:15:41.208145 containerd[1524]: time="2025-09-09T21:15:41.208115694Z" level=info msg="connecting to shim 7e3522b0f098ca22fa258d347762ddcf864ba75b87e3110046237b19c78ad08e" 
address="unix:///run/containerd/s/b3712ee1a5ca3109b8cf917aaf38bead74dfc379171482dc7e23dc289db77dcb" protocol=ttrpc version=3 Sep 9 21:15:41.217551 containerd[1524]: time="2025-09-09T21:15:41.217441697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57edd24af1f1a60ebec264cff4d8b05f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6a5c3435fd54dd1392da95ffbed191abc1e8ecdda60b39a831f91129da634d1e\"" Sep 9 21:15:41.218302 kubelet[2285]: E0909 21:15:41.218252 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.220787 containerd[1524]: time="2025-09-09T21:15:41.220752906Z" level=info msg="CreateContainer within sandbox \"6a5c3435fd54dd1392da95ffbed191abc1e8ecdda60b39a831f91129da634d1e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 9 21:15:41.226416 containerd[1524]: time="2025-09-09T21:15:41.226375853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:5dc878868de11c6196259ae42039f4ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"404d3db6803670e3613d7c5b59725711360393eeecfc63e77e39168338382e0a\"" Sep 9 21:15:41.227316 kubelet[2285]: E0909 21:15:41.227147 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.229223 containerd[1524]: time="2025-09-09T21:15:41.229170750Z" level=info msg="CreateContainer within sandbox \"404d3db6803670e3613d7c5b59725711360393eeecfc63e77e39168338382e0a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 9 21:15:41.231779 containerd[1524]: time="2025-09-09T21:15:41.231749112Z" level=info msg="Container 6f638e097146b22adcc293af4f5ef7e827e8fae95e0343bae5f2371b587205a6: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:15:41.237117 
containerd[1524]: time="2025-09-09T21:15:41.237079711Z" level=info msg="Container 8daeb24cddae1407262c1a7de5a8ed775d49d2ce667b49d78bc009a8c6bb01ad: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:15:41.237741 systemd[1]: Started cri-containerd-7e3522b0f098ca22fa258d347762ddcf864ba75b87e3110046237b19c78ad08e.scope - libcontainer container 7e3522b0f098ca22fa258d347762ddcf864ba75b87e3110046237b19c78ad08e. Sep 9 21:15:41.239226 containerd[1524]: time="2025-09-09T21:15:41.239187897Z" level=info msg="CreateContainer within sandbox \"6a5c3435fd54dd1392da95ffbed191abc1e8ecdda60b39a831f91129da634d1e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6f638e097146b22adcc293af4f5ef7e827e8fae95e0343bae5f2371b587205a6\"" Sep 9 21:15:41.239710 containerd[1524]: time="2025-09-09T21:15:41.239684157Z" level=info msg="StartContainer for \"6f638e097146b22adcc293af4f5ef7e827e8fae95e0343bae5f2371b587205a6\"" Sep 9 21:15:41.242576 containerd[1524]: time="2025-09-09T21:15:41.242395132Z" level=info msg="connecting to shim 6f638e097146b22adcc293af4f5ef7e827e8fae95e0343bae5f2371b587205a6" address="unix:///run/containerd/s/32e1e657ba216fa7f2ef30226d04fde6f8ec49fcbfe77891173e63cabd35f424" protocol=ttrpc version=3 Sep 9 21:15:41.248826 containerd[1524]: time="2025-09-09T21:15:41.248752044Z" level=info msg="CreateContainer within sandbox \"404d3db6803670e3613d7c5b59725711360393eeecfc63e77e39168338382e0a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8daeb24cddae1407262c1a7de5a8ed775d49d2ce667b49d78bc009a8c6bb01ad\"" Sep 9 21:15:41.249351 containerd[1524]: time="2025-09-09T21:15:41.249324716Z" level=info msg="StartContainer for \"8daeb24cddae1407262c1a7de5a8ed775d49d2ce667b49d78bc009a8c6bb01ad\"" Sep 9 21:15:41.250314 containerd[1524]: time="2025-09-09T21:15:41.250291912Z" level=info msg="connecting to shim 8daeb24cddae1407262c1a7de5a8ed775d49d2ce667b49d78bc009a8c6bb01ad" 
address="unix:///run/containerd/s/0b6343a5e9dd645bbd4d8c186303fcb44e4e87193f17fbca44dcbbd614c17196" protocol=ttrpc version=3 Sep 9 21:15:41.275718 systemd[1]: Started cri-containerd-6f638e097146b22adcc293af4f5ef7e827e8fae95e0343bae5f2371b587205a6.scope - libcontainer container 6f638e097146b22adcc293af4f5ef7e827e8fae95e0343bae5f2371b587205a6. Sep 9 21:15:41.276782 systemd[1]: Started cri-containerd-8daeb24cddae1407262c1a7de5a8ed775d49d2ce667b49d78bc009a8c6bb01ad.scope - libcontainer container 8daeb24cddae1407262c1a7de5a8ed775d49d2ce667b49d78bc009a8c6bb01ad. Sep 9 21:15:41.286143 containerd[1524]: time="2025-09-09T21:15:41.286103548Z" level=info msg="StartContainer for \"7e3522b0f098ca22fa258d347762ddcf864ba75b87e3110046237b19c78ad08e\" returns successfully" Sep 9 21:15:41.316158 containerd[1524]: time="2025-09-09T21:15:41.315972009Z" level=info msg="StartContainer for \"6f638e097146b22adcc293af4f5ef7e827e8fae95e0343bae5f2371b587205a6\" returns successfully" Sep 9 21:15:41.324675 containerd[1524]: time="2025-09-09T21:15:41.324636825Z" level=info msg="StartContainer for \"8daeb24cddae1407262c1a7de5a8ed775d49d2ce667b49d78bc009a8c6bb01ad\" returns successfully" Sep 9 21:15:41.334464 kubelet[2285]: I0909 21:15:41.334432 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:15:41.335121 kubelet[2285]: E0909 21:15:41.335093 2285 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.61:6443/api/v1/nodes\": dial tcp 10.0.0.61:6443: connect: connection refused" node="localhost" Sep 9 21:15:41.471644 kubelet[2285]: E0909 21:15:41.471616 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.476920 kubelet[2285]: E0909 21:15:41.476891 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:41.478488 kubelet[2285]: E0909 21:15:41.478469 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:42.137074 kubelet[2285]: I0909 21:15:42.137043 2285 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:15:42.483825 kubelet[2285]: E0909 21:15:42.483737 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:42.484323 kubelet[2285]: E0909 21:15:42.484236 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:42.663480 kubelet[2285]: E0909 21:15:42.663414 2285 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 9 21:15:42.739435 kubelet[2285]: I0909 21:15:42.739118 2285 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 21:15:42.739435 kubelet[2285]: E0909 21:15:42.739152 2285 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 9 21:15:43.439491 kubelet[2285]: I0909 21:15:43.439431 2285 apiserver.go:52] "Watching apiserver" Sep 9 21:15:43.449143 kubelet[2285]: I0909 21:15:43.449119 2285 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 21:15:43.489034 kubelet[2285]: E0909 21:15:43.489005 2285 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:43.489193 
kubelet[2285]: E0909 21:15:43.489155 2285 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:44.731346 systemd[1]: Reload requested from client PID 2565 ('systemctl') (unit session-7.scope)... Sep 9 21:15:44.731363 systemd[1]: Reloading... Sep 9 21:15:44.788643 zram_generator::config[2611]: No configuration found. Sep 9 21:15:45.025813 systemd[1]: Reloading finished in 294 ms. Sep 9 21:15:45.052522 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:15:45.061040 systemd[1]: kubelet.service: Deactivated successfully. Sep 9 21:15:45.062620 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:15:45.062677 systemd[1]: kubelet.service: Consumed 992ms CPU time, 126.6M memory peak. Sep 9 21:15:45.064165 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 21:15:45.206469 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 21:15:45.211347 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 9 21:15:45.246865 kubelet[2650]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 9 21:15:45.246865 kubelet[2650]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 9 21:15:45.246865 kubelet[2650]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 9 21:15:45.247447 kubelet[2650]: I0909 21:15:45.246993 2650 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 9 21:15:45.253067 kubelet[2650]: I0909 21:15:45.253027 2650 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 9 21:15:45.253067 kubelet[2650]: I0909 21:15:45.253057 2650 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 9 21:15:45.253269 kubelet[2650]: I0909 21:15:45.253253 2650 server.go:934] "Client rotation is on, will bootstrap in background" Sep 9 21:15:45.254510 kubelet[2650]: I0909 21:15:45.254481 2650 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 9 21:15:45.256427 kubelet[2650]: I0909 21:15:45.256406 2650 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 9 21:15:45.260777 kubelet[2650]: I0909 21:15:45.260751 2650 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 9 21:15:45.263127 kubelet[2650]: I0909 21:15:45.263107 2650 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 9 21:15:45.263222 kubelet[2650]: I0909 21:15:45.263202 2650 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 9 21:15:45.263314 kubelet[2650]: I0909 21:15:45.263292 2650 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 9 21:15:45.263474 kubelet[2650]: I0909 21:15:45.263314 2650 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Sep 9 21:15:45.263474 kubelet[2650]: I0909 21:15:45.263473 2650 topology_manager.go:138] "Creating topology manager with none policy" Sep 9 21:15:45.263611 kubelet[2650]: I0909 21:15:45.263483 2650 container_manager_linux.go:300] "Creating device plugin manager" Sep 9 21:15:45.263611 kubelet[2650]: I0909 21:15:45.263513 2650 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:15:45.263707 kubelet[2650]: I0909 21:15:45.263632 2650 kubelet.go:408] "Attempting to sync node with API server" Sep 9 21:15:45.263707 kubelet[2650]: I0909 21:15:45.263645 2650 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 9 21:15:45.263707 kubelet[2650]: I0909 21:15:45.263660 2650 kubelet.go:314] "Adding apiserver pod source" Sep 9 21:15:45.263707 kubelet[2650]: I0909 21:15:45.263672 2650 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 9 21:15:45.264176 kubelet[2650]: I0909 21:15:45.264070 2650 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 9 21:15:45.264695 kubelet[2650]: I0909 21:15:45.264680 2650 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 9 21:15:45.265214 kubelet[2650]: I0909 21:15:45.265201 2650 server.go:1274] "Started kubelet" Sep 9 21:15:45.265574 kubelet[2650]: I0909 21:15:45.265519 2650 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Sep 9 21:15:45.265808 kubelet[2650]: I0909 21:15:45.265689 2650 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 9 21:15:45.266058 kubelet[2650]: I0909 21:15:45.266032 2650 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 9 21:15:45.267328 kubelet[2650]: I0909 21:15:45.267299 2650 server.go:449] "Adding debug handlers to kubelet server" Sep 9 21:15:45.269345 kubelet[2650]: 
I0909 21:15:45.269329 2650 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 9 21:15:45.269716 kubelet[2650]: I0909 21:15:45.269669 2650 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 9 21:15:45.270786 kubelet[2650]: E0909 21:15:45.270744 2650 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 9 21:15:45.271037 kubelet[2650]: I0909 21:15:45.270790 2650 volume_manager.go:289] "Starting Kubelet Volume Manager" Sep 9 21:15:45.271037 kubelet[2650]: E0909 21:15:45.270986 2650 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 9 21:15:45.271104 kubelet[2650]: I0909 21:15:45.271066 2650 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Sep 9 21:15:45.271391 kubelet[2650]: I0909 21:15:45.271366 2650 reconciler.go:26] "Reconciler: start to sync state" Sep 9 21:15:45.271391 kubelet[2650]: I0909 21:15:45.271376 2650 factory.go:221] Registration of the systemd container factory successfully Sep 9 21:15:45.271499 kubelet[2650]: I0909 21:15:45.271471 2650 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 9 21:15:45.289554 kubelet[2650]: I0909 21:15:45.287718 2650 factory.go:221] Registration of the containerd container factory successfully Sep 9 21:15:45.290804 kubelet[2650]: I0909 21:15:45.290772 2650 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 9 21:15:45.291656 kubelet[2650]: I0909 21:15:45.291633 2650 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Sep 9 21:15:45.291691 kubelet[2650]: I0909 21:15:45.291658 2650 status_manager.go:217] "Starting to sync pod status with apiserver" Sep 9 21:15:45.291691 kubelet[2650]: I0909 21:15:45.291673 2650 kubelet.go:2321] "Starting kubelet main sync loop" Sep 9 21:15:45.291738 kubelet[2650]: E0909 21:15:45.291721 2650 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 9 21:15:45.316043 kubelet[2650]: I0909 21:15:45.316023 2650 cpu_manager.go:214] "Starting CPU manager" policy="none" Sep 9 21:15:45.316043 kubelet[2650]: I0909 21:15:45.316040 2650 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Sep 9 21:15:45.316141 kubelet[2650]: I0909 21:15:45.316060 2650 state_mem.go:36] "Initialized new in-memory state store" Sep 9 21:15:45.316204 kubelet[2650]: I0909 21:15:45.316187 2650 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 9 21:15:45.316231 kubelet[2650]: I0909 21:15:45.316203 2650 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 9 21:15:45.316231 kubelet[2650]: I0909 21:15:45.316219 2650 policy_none.go:49] "None policy: Start" Sep 9 21:15:45.316773 kubelet[2650]: I0909 21:15:45.316751 2650 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 9 21:15:45.316773 kubelet[2650]: I0909 21:15:45.316770 2650 state_mem.go:35] "Initializing new in-memory state store" Sep 9 21:15:45.316905 kubelet[2650]: I0909 21:15:45.316887 2650 state_mem.go:75] "Updated machine memory state" Sep 9 21:15:45.320590 kubelet[2650]: I0909 21:15:45.320495 2650 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 9 21:15:45.321146 kubelet[2650]: I0909 21:15:45.321114 2650 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 9 21:15:45.321204 kubelet[2650]: I0909 21:15:45.321136 2650 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Sep 9 21:15:45.321406 kubelet[2650]: I0909 21:15:45.321393 2650 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 9 21:15:45.424491 kubelet[2650]: I0909 21:15:45.424435 2650 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 9 21:15:45.430162 kubelet[2650]: I0909 21:15:45.430134 2650 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Sep 9 21:15:45.430259 kubelet[2650]: I0909 21:15:45.430239 2650 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 9 21:15:45.472056 kubelet[2650]: I0909 21:15:45.472033 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:45.472217 kubelet[2650]: I0909 21:15:45.472062 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:45.474658 kubelet[2650]: I0909 21:15:45.472117 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:45.474658 kubelet[2650]: I0909 21:15:45.474656 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:45.474761 kubelet[2650]: I0909 21:15:45.474686 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57edd24af1f1a60ebec264cff4d8b05f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57edd24af1f1a60ebec264cff4d8b05f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:45.474761 kubelet[2650]: I0909 21:15:45.474706 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57edd24af1f1a60ebec264cff4d8b05f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57edd24af1f1a60ebec264cff4d8b05f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:45.474761 kubelet[2650]: I0909 21:15:45.474730 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57edd24af1f1a60ebec264cff4d8b05f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57edd24af1f1a60ebec264cff4d8b05f\") " pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:45.474761 kubelet[2650]: I0909 21:15:45.474747 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fec3f691a145cb26ff55e4af388500b7-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"fec3f691a145cb26ff55e4af388500b7\") " pod="kube-system/kube-controller-manager-localhost" Sep 9 21:15:45.474852 kubelet[2650]: I0909 21:15:45.474781 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5dc878868de11c6196259ae42039f4ff-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"5dc878868de11c6196259ae42039f4ff\") " pod="kube-system/kube-scheduler-localhost" Sep 9 21:15:45.698499 kubelet[2650]: E0909 21:15:45.698391 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:45.698860 kubelet[2650]: E0909 21:15:45.698404 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:45.698860 kubelet[2650]: E0909 21:15:45.698452 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:45.732024 sudo[2688]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 9 21:15:45.732315 sudo[2688]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 9 21:15:46.048499 sudo[2688]: pam_unix(sudo:session): session closed for user root Sep 9 21:15:46.264298 kubelet[2650]: I0909 21:15:46.264247 2650 apiserver.go:52] "Watching apiserver" Sep 9 21:15:46.271877 kubelet[2650]: I0909 21:15:46.271849 2650 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 9 21:15:46.305491 kubelet[2650]: E0909 21:15:46.305387 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:46.305491 kubelet[2650]: E0909 21:15:46.305397 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:46.309765 kubelet[2650]: 
E0909 21:15:46.309734 2650 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 9 21:15:46.309888 kubelet[2650]: E0909 21:15:46.309861 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:46.334542 kubelet[2650]: I0909 21:15:46.334484 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.3344709940000001 podStartE2EDuration="1.334470994s" podCreationTimestamp="2025-09-09 21:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:15:46.334315786 +0000 UTC m=+1.120159882" watchObservedRunningTime="2025-09-09 21:15:46.334470994 +0000 UTC m=+1.120315050" Sep 9 21:15:46.334926 kubelet[2650]: I0909 21:15:46.334886 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.334875741 podStartE2EDuration="1.334875741s" podCreationTimestamp="2025-09-09 21:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:15:46.327790704 +0000 UTC m=+1.113634880" watchObservedRunningTime="2025-09-09 21:15:46.334875741 +0000 UTC m=+1.120719877" Sep 9 21:15:46.341978 kubelet[2650]: I0909 21:15:46.341924 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3419112530000001 podStartE2EDuration="1.341911253s" podCreationTimestamp="2025-09-09 21:15:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 
21:15:46.340031453 +0000 UTC m=+1.125875589" watchObservedRunningTime="2025-09-09 21:15:46.341911253 +0000 UTC m=+1.127755349" Sep 9 21:15:47.306524 kubelet[2650]: E0909 21:15:47.306492 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:47.368493 sudo[1734]: pam_unix(sudo:session): session closed for user root Sep 9 21:15:47.370363 sshd[1733]: Connection closed by 10.0.0.1 port 49146 Sep 9 21:15:47.370021 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 9 21:15:47.372795 systemd[1]: sshd@6-10.0.0.61:22-10.0.0.1:49146.service: Deactivated successfully. Sep 9 21:15:47.374750 systemd[1]: session-7.scope: Deactivated successfully. Sep 9 21:15:47.374965 systemd[1]: session-7.scope: Consumed 7.194s CPU time, 259M memory peak. Sep 9 21:15:47.376762 systemd-logind[1501]: Session 7 logged out. Waiting for processes to exit. Sep 9 21:15:47.378648 systemd-logind[1501]: Removed session 7. Sep 9 21:15:50.348578 kubelet[2650]: I0909 21:15:50.348528 2650 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 9 21:15:50.349659 kubelet[2650]: I0909 21:15:50.349006 2650 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 9 21:15:50.349708 containerd[1524]: time="2025-09-09T21:15:50.348836844Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 9 21:15:50.412150 kubelet[2650]: E0909 21:15:50.411737 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:51.001592 systemd[1]: Created slice kubepods-besteffort-pod0677283a_f6d2_4348_9c4d_6b60d0ee655d.slice - libcontainer container kubepods-besteffort-pod0677283a_f6d2_4348_9c4d_6b60d0ee655d.slice. Sep 9 21:15:51.009636 kubelet[2650]: I0909 21:15:51.007979 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-kernel\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009636 kubelet[2650]: I0909 21:15:51.008016 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-hostproc\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009636 kubelet[2650]: I0909 21:15:51.008034 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-xtables-lock\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009636 kubelet[2650]: I0909 21:15:51.008049 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-hubble-tls\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009636 kubelet[2650]: I0909 21:15:51.008067 2650 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzmvd\" (UniqueName: \"kubernetes.io/projected/0677283a-f6d2-4348-9c4d-6b60d0ee655d-kube-api-access-wzmvd\") pod \"kube-proxy-4n648\" (UID: \"0677283a-f6d2-4348-9c4d-6b60d0ee655d\") " pod="kube-system/kube-proxy-4n648" Sep 9 21:15:51.009636 kubelet[2650]: I0909 21:15:51.008082 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-bpf-maps\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009897 kubelet[2650]: I0909 21:15:51.008096 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cni-path\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009897 kubelet[2650]: I0909 21:15:51.008110 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6krgs\" (UniqueName: \"kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-kube-api-access-6krgs\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009897 kubelet[2650]: I0909 21:15:51.008165 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0677283a-f6d2-4348-9c4d-6b60d0ee655d-xtables-lock\") pod \"kube-proxy-4n648\" (UID: \"0677283a-f6d2-4348-9c4d-6b60d0ee655d\") " pod="kube-system/kube-proxy-4n648" Sep 9 21:15:51.009897 kubelet[2650]: I0909 21:15:51.008203 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1252fff5-664a-44d8-975a-0410271d86a6-clustermesh-secrets\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.009897 kubelet[2650]: I0909 21:15:51.008223 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0677283a-f6d2-4348-9c4d-6b60d0ee655d-lib-modules\") pod \"kube-proxy-4n648\" (UID: \"0677283a-f6d2-4348-9c4d-6b60d0ee655d\") " pod="kube-system/kube-proxy-4n648" Sep 9 21:15:51.009897 kubelet[2650]: I0909 21:15:51.008243 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-cgroup\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.010014 kubelet[2650]: I0909 21:15:51.008258 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-etc-cni-netd\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.010014 kubelet[2650]: I0909 21:15:51.008280 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-lib-modules\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.010014 kubelet[2650]: I0909 21:15:51.008315 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0677283a-f6d2-4348-9c4d-6b60d0ee655d-kube-proxy\") pod \"kube-proxy-4n648\" 
(UID: \"0677283a-f6d2-4348-9c4d-6b60d0ee655d\") " pod="kube-system/kube-proxy-4n648" Sep 9 21:15:51.010014 kubelet[2650]: I0909 21:15:51.008332 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-run\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.010014 kubelet[2650]: I0909 21:15:51.008354 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1252fff5-664a-44d8-975a-0410271d86a6-cilium-config-path\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.010014 kubelet[2650]: I0909 21:15:51.008370 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-net\") pod \"cilium-qgdb6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") " pod="kube-system/cilium-qgdb6" Sep 9 21:15:51.016492 systemd[1]: Created slice kubepods-burstable-pod1252fff5_664a_44d8_975a_0410271d86a6.slice - libcontainer container kubepods-burstable-pod1252fff5_664a_44d8_975a_0410271d86a6.slice. 
Sep 9 21:15:51.313811 kubelet[2650]: E0909 21:15:51.313560 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:51.314614 containerd[1524]: time="2025-09-09T21:15:51.314179930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n648,Uid:0677283a-f6d2-4348-9c4d-6b60d0ee655d,Namespace:kube-system,Attempt:0,}" Sep 9 21:15:51.322870 kubelet[2650]: E0909 21:15:51.322831 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:51.323352 containerd[1524]: time="2025-09-09T21:15:51.323248224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgdb6,Uid:1252fff5-664a-44d8-975a-0410271d86a6,Namespace:kube-system,Attempt:0,}" Sep 9 21:15:51.381986 containerd[1524]: time="2025-09-09T21:15:51.381937762Z" level=info msg="connecting to shim fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636" address="unix:///run/containerd/s/8213ba28f66185ae5bde9491ac6c35b37551393119422e360bd89ea2e49bc957" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:15:51.382932 containerd[1524]: time="2025-09-09T21:15:51.382769251Z" level=info msg="connecting to shim ec6c096c688afee2632f7a07537562d6fcabf656cef87d72f9793eac2eec1019" address="unix:///run/containerd/s/4b219d25671542ec089201d1d561d609b09ff802529b75ce2b1c2993574f64bc" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:15:51.405734 systemd[1]: Created slice kubepods-besteffort-podcca4563c_beab_418e_ae3d_77f098b6fdc1.slice - libcontainer container kubepods-besteffort-podcca4563c_beab_418e_ae3d_77f098b6fdc1.slice. 
Sep 9 21:15:51.412558 kubelet[2650]: I0909 21:15:51.412466 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qbh\" (UniqueName: \"kubernetes.io/projected/cca4563c-beab-418e-ae3d-77f098b6fdc1-kube-api-access-77qbh\") pod \"cilium-operator-5d85765b45-tvlh9\" (UID: \"cca4563c-beab-418e-ae3d-77f098b6fdc1\") " pod="kube-system/cilium-operator-5d85765b45-tvlh9" Sep 9 21:15:51.414577 kubelet[2650]: I0909 21:15:51.414219 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cca4563c-beab-418e-ae3d-77f098b6fdc1-cilium-config-path\") pod \"cilium-operator-5d85765b45-tvlh9\" (UID: \"cca4563c-beab-418e-ae3d-77f098b6fdc1\") " pod="kube-system/cilium-operator-5d85765b45-tvlh9" Sep 9 21:15:51.418719 systemd[1]: Started cri-containerd-ec6c096c688afee2632f7a07537562d6fcabf656cef87d72f9793eac2eec1019.scope - libcontainer container ec6c096c688afee2632f7a07537562d6fcabf656cef87d72f9793eac2eec1019. Sep 9 21:15:51.422596 systemd[1]: Started cri-containerd-fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636.scope - libcontainer container fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636. 
Sep 9 21:15:51.448318 containerd[1524]: time="2025-09-09T21:15:51.448283510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4n648,Uid:0677283a-f6d2-4348-9c4d-6b60d0ee655d,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec6c096c688afee2632f7a07537562d6fcabf656cef87d72f9793eac2eec1019\"" Sep 9 21:15:51.449253 kubelet[2650]: E0909 21:15:51.449227 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:51.451994 containerd[1524]: time="2025-09-09T21:15:51.451956646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgdb6,Uid:1252fff5-664a-44d8-975a-0410271d86a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\"" Sep 9 21:15:51.452410 kubelet[2650]: E0909 21:15:51.452387 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:51.452965 containerd[1524]: time="2025-09-09T21:15:51.452852219Z" level=info msg="CreateContainer within sandbox \"ec6c096c688afee2632f7a07537562d6fcabf656cef87d72f9793eac2eec1019\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 9 21:15:51.453578 containerd[1524]: time="2025-09-09T21:15:51.453515738Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 9 21:15:51.465326 containerd[1524]: time="2025-09-09T21:15:51.465283912Z" level=info msg="Container 64e594990d4157d435f74bf9c1435f45098d8e6bf9bcd1c824096a13e12dcb38: CDI devices from CRI Config.CDIDevices: []" Sep 9 21:15:51.471666 containerd[1524]: time="2025-09-09T21:15:51.471633486Z" level=info msg="CreateContainer within sandbox \"ec6c096c688afee2632f7a07537562d6fcabf656cef87d72f9793eac2eec1019\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64e594990d4157d435f74bf9c1435f45098d8e6bf9bcd1c824096a13e12dcb38\"" Sep 9 21:15:51.472510 containerd[1524]: time="2025-09-09T21:15:51.472483816Z" level=info msg="StartContainer for \"64e594990d4157d435f74bf9c1435f45098d8e6bf9bcd1c824096a13e12dcb38\"" Sep 9 21:15:51.474102 containerd[1524]: time="2025-09-09T21:15:51.474076390Z" level=info msg="connecting to shim 64e594990d4157d435f74bf9c1435f45098d8e6bf9bcd1c824096a13e12dcb38" address="unix:///run/containerd/s/4b219d25671542ec089201d1d561d609b09ff802529b75ce2b1c2993574f64bc" protocol=ttrpc version=3 Sep 9 21:15:51.486095 kubelet[2650]: E0909 21:15:51.486070 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:51.496742 systemd[1]: Started cri-containerd-64e594990d4157d435f74bf9c1435f45098d8e6bf9bcd1c824096a13e12dcb38.scope - libcontainer container 64e594990d4157d435f74bf9c1435f45098d8e6bf9bcd1c824096a13e12dcb38. 
Sep 9 21:15:51.535769 containerd[1524]: time="2025-09-09T21:15:51.535732502Z" level=info msg="StartContainer for \"64e594990d4157d435f74bf9c1435f45098d8e6bf9bcd1c824096a13e12dcb38\" returns successfully" Sep 9 21:15:51.715269 kubelet[2650]: E0909 21:15:51.714939 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 9 21:15:51.715499 containerd[1524]: time="2025-09-09T21:15:51.715464249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tvlh9,Uid:cca4563c-beab-418e-ae3d-77f098b6fdc1,Namespace:kube-system,Attempt:0,}" Sep 9 21:15:51.731582 containerd[1524]: time="2025-09-09T21:15:51.731516275Z" level=info msg="connecting to shim a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501" address="unix:///run/containerd/s/35236ff6a88ea96441fe80c35604b9b110693cc39307b30ae8ecd7882710b56e" namespace=k8s.io protocol=ttrpc version=3 Sep 9 21:15:51.765815 systemd[1]: Started cri-containerd-a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501.scope - libcontainer container a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501. 
Sep 9 21:15:51.794935 containerd[1524]: time="2025-09-09T21:15:51.794890608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-tvlh9,Uid:cca4563c-beab-418e-ae3d-77f098b6fdc1,Namespace:kube-system,Attempt:0,} returns sandbox id \"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\""
Sep 9 21:15:51.795479 kubelet[2650]: E0909 21:15:51.795452 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:15:52.317465 kubelet[2650]: E0909 21:15:52.317433 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:15:52.317580 kubelet[2650]: E0909 21:15:52.317498 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:15:53.386734 kubelet[2650]: E0909 21:15:53.386688 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:15:53.403020 kubelet[2650]: I0909 21:15:53.402958 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4n648" podStartSLOduration=3.402940915 podStartE2EDuration="3.402940915s" podCreationTimestamp="2025-09-09 21:15:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:15:52.332737373 +0000 UTC m=+7.118581469" watchObservedRunningTime="2025-09-09 21:15:53.402940915 +0000 UTC m=+8.188785011"
Sep 9 21:15:54.320216 kubelet[2650]: E0909 21:15:54.320161 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:15:59.238219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999703290.mount: Deactivated successfully.
Sep 9 21:16:00.504386 containerd[1524]: time="2025-09-09T21:16:00.504331743Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:16:00.507104 containerd[1524]: time="2025-09-09T21:16:00.506940837Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 9 21:16:00.508600 containerd[1524]: time="2025-09-09T21:16:00.508101559Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:16:00.509876 containerd[1524]: time="2025-09-09T21:16:00.509825061Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.056266081s"
Sep 9 21:16:00.509876 containerd[1524]: time="2025-09-09T21:16:00.509873143Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 9 21:16:00.511353 kubelet[2650]: E0909 21:16:00.511300 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:00.523049 containerd[1524]: time="2025-09-09T21:16:00.523013456Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 9 21:16:00.531714 containerd[1524]: time="2025-09-09T21:16:00.531665648Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 21:16:00.539215 containerd[1524]: time="2025-09-09T21:16:00.539178398Z" level=info msg="Container b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:00.542935 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1937493310.mount: Deactivated successfully.
Sep 9 21:16:00.544811 containerd[1524]: time="2025-09-09T21:16:00.544766279Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\""
Sep 9 21:16:00.545327 containerd[1524]: time="2025-09-09T21:16:00.545304419Z" level=info msg="StartContainer for \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\""
Sep 9 21:16:00.546365 containerd[1524]: time="2025-09-09T21:16:00.546336536Z" level=info msg="connecting to shim b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3" address="unix:///run/containerd/s/8213ba28f66185ae5bde9491ac6c35b37551393119422e360bd89ea2e49bc957" protocol=ttrpc version=3
Sep 9 21:16:00.592736 systemd[1]: Started cri-containerd-b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3.scope - libcontainer container b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3.
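The recurring `dns.go:153` errors above come from the kubelet truncating a pod's `resolv.conf`: Kubernetes caps the number of nameservers per pod (the applied line in the log keeps exactly three addresses). A minimal sketch of that truncation, assuming the cap of 3 and a hypothetical four-entry host `resolv.conf` (the fourth address is invented for illustration):

```python
MAX_NAMESERVERS = 3  # assumed kubelet per-pod nameserver cap, matching the 3-address applied line above

def apply_nameserver_limit(nameservers):
    """Keep the first MAX_NAMESERVERS entries and report the rest as omitted."""
    return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

# Hypothetical host resolv.conf with one nameserver too many.
host_ns = ["1.1.1.1", "1.0.0.1", "8.8.8.8", "8.8.4.4"]
applied, omitted = apply_nameserver_limit(host_ns)
print("applied nameserver line is:", " ".join(applied))
```

With more than three entries on the node, every pod sync logs the "Nameserver limits exceeded" warning seen throughout this boot.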
Sep 9 21:16:00.618852 containerd[1524]: time="2025-09-09T21:16:00.618818106Z" level=info msg="StartContainer for \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" returns successfully"
Sep 9 21:16:00.629674 systemd[1]: cri-containerd-b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3.scope: Deactivated successfully.
Sep 9 21:16:00.659617 containerd[1524]: time="2025-09-09T21:16:00.659514331Z" level=info msg="received exit event container_id:\"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" id:\"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" pid:3075 exited_at:{seconds:1757452560 nanos:654295423}"
Sep 9 21:16:00.660155 containerd[1524]: time="2025-09-09T21:16:00.659666577Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" id:\"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" pid:3075 exited_at:{seconds:1757452560 nanos:654295423}"
Sep 9 21:16:00.692991 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3-rootfs.mount: Deactivated successfully.
Sep 9 21:16:01.335209 kubelet[2650]: E0909 21:16:01.335035 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:01.336973 containerd[1524]: time="2025-09-09T21:16:01.336936795Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 21:16:01.350731 containerd[1524]: time="2025-09-09T21:16:01.350697426Z" level=info msg="Container a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:01.356531 containerd[1524]: time="2025-09-09T21:16:01.356486824Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\""
Sep 9 21:16:01.357059 containerd[1524]: time="2025-09-09T21:16:01.356986481Z" level=info msg="StartContainer for \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\""
Sep 9 21:16:01.358036 containerd[1524]: time="2025-09-09T21:16:01.357982915Z" level=info msg="connecting to shim a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b" address="unix:///run/containerd/s/8213ba28f66185ae5bde9491ac6c35b37551393119422e360bd89ea2e49bc957" protocol=ttrpc version=3
Sep 9 21:16:01.379752 systemd[1]: Started cri-containerd-a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b.scope - libcontainer container a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b.
Sep 9 21:16:01.405026 containerd[1524]: time="2025-09-09T21:16:01.404987682Z" level=info msg="StartContainer for \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" returns successfully"
Sep 9 21:16:01.417496 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 9 21:16:01.417806 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:16:01.418268 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:16:01.419644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 9 21:16:01.420850 systemd[1]: cri-containerd-a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b.scope: Deactivated successfully.
Sep 9 21:16:01.422003 containerd[1524]: time="2025-09-09T21:16:01.421970383Z" level=info msg="received exit event container_id:\"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" id:\"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" pid:3119 exited_at:{seconds:1757452561 nanos:421116274}"
Sep 9 21:16:01.423027 containerd[1524]: time="2025-09-09T21:16:01.422994458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" id:\"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" pid:3119 exited_at:{seconds:1757452561 nanos:421116274}"
Sep 9 21:16:01.443874 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 9 21:16:01.989129 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3672823272.mount: Deactivated successfully.
Sep 9 21:16:02.340668 kubelet[2650]: E0909 21:16:02.340196 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:02.343421 containerd[1524]: time="2025-09-09T21:16:02.343361865Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 21:16:02.385983 containerd[1524]: time="2025-09-09T21:16:02.385934408Z" level=info msg="Container 6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:02.393371 containerd[1524]: time="2025-09-09T21:16:02.393318288Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\""
Sep 9 21:16:02.393794 containerd[1524]: time="2025-09-09T21:16:02.393761942Z" level=info msg="StartContainer for \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\""
Sep 9 21:16:02.395255 containerd[1524]: time="2025-09-09T21:16:02.395033384Z" level=info msg="connecting to shim 6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b" address="unix:///run/containerd/s/8213ba28f66185ae5bde9491ac6c35b37551393119422e360bd89ea2e49bc957" protocol=ttrpc version=3
Sep 9 21:16:02.427781 systemd[1]: Started cri-containerd-6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b.scope - libcontainer container 6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b.
Sep 9 21:16:02.480165 containerd[1524]: time="2025-09-09T21:16:02.480124508Z" level=info msg="StartContainer for \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" returns successfully"
Sep 9 21:16:02.480409 systemd[1]: cri-containerd-6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b.scope: Deactivated successfully.
Sep 9 21:16:02.482625 systemd[1]: cri-containerd-6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b.scope: Consumed 33ms CPU time, 4.2M memory peak, 1.3M read from disk.
Sep 9 21:16:02.492216 containerd[1524]: time="2025-09-09T21:16:02.492175260Z" level=info msg="received exit event container_id:\"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" id:\"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" pid:3175 exited_at:{seconds:1757452562 nanos:491991374}"
Sep 9 21:16:02.492536 containerd[1524]: time="2025-09-09T21:16:02.492495230Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" id:\"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" pid:3175 exited_at:{seconds:1757452562 nanos:491991374}"
Sep 9 21:16:02.701992 containerd[1524]: time="2025-09-09T21:16:02.701876472Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 9 21:16:02.702309 containerd[1524]: time="2025-09-09T21:16:02.702288446Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 9 21:16:02.703325 containerd[1524]: time="2025-09-09T21:16:02.703294598Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
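The `exited_at:{seconds:... nanos:...}` fields in the TaskExit events above are protobuf-style timestamps. A small sketch converting one of them back to a wall-clock time, which lines up with the surrounding `21:16:02` journal entries:

```python
from datetime import datetime, timezone

def exited_at_to_datetime(seconds, nanos):
    # Protobuf Timestamp: whole seconds since the Unix epoch plus a
    # nanosecond remainder; Python datetimes only carry microseconds.
    return datetime.fromtimestamp(seconds, tz=timezone.utc).replace(
        microsecond=nanos // 1000
    )

# exited_at from the mount-bpf-fs container's exit event above.
t = exited_at_to_datetime(1757452562, 491991374)
print(t.isoformat())  # -> 2025-09-09T21:16:02.491991+00:00
```

The sub-microsecond part of `nanos` is discarded, which is fine for correlating against journal timestamps.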
Sep 9 21:16:02.705305 containerd[1524]: time="2025-09-09T21:16:02.705187660Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.181882274s"
Sep 9 21:16:02.705305 containerd[1524]: time="2025-09-09T21:16:02.705222541Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 9 21:16:02.709019 containerd[1524]: time="2025-09-09T21:16:02.708902661Z" level=info msg="CreateContainer within sandbox \"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 9 21:16:02.717095 containerd[1524]: time="2025-09-09T21:16:02.717060926Z" level=info msg="Container 8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:02.729502 containerd[1524]: time="2025-09-09T21:16:02.729458969Z" level=info msg="CreateContainer within sandbox \"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\""
Sep 9 21:16:02.729989 containerd[1524]: time="2025-09-09T21:16:02.729883822Z" level=info msg="StartContainer for \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\""
Sep 9 21:16:02.730712 containerd[1524]: time="2025-09-09T21:16:02.730676768Z" level=info msg="connecting to shim 8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d" address="unix:///run/containerd/s/35236ff6a88ea96441fe80c35604b9b110693cc39307b30ae8ecd7882710b56e" protocol=ttrpc version=3
Sep 9 21:16:02.753724 systemd[1]: Started cri-containerd-8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d.scope - libcontainer container 8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d.
Sep 9 21:16:02.782150 containerd[1524]: time="2025-09-09T21:16:02.782115599Z" level=info msg="StartContainer for \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" returns successfully"
Sep 9 21:16:03.344651 kubelet[2650]: E0909 21:16:03.344590 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:03.349283 kubelet[2650]: E0909 21:16:03.349130 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:03.351784 containerd[1524]: time="2025-09-09T21:16:03.351745827Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 21:16:03.358769 kubelet[2650]: I0909 21:16:03.358480 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-tvlh9" podStartSLOduration=1.449319952 podStartE2EDuration="12.358464714s" podCreationTimestamp="2025-09-09 21:15:51 +0000 UTC" firstStartedPulling="2025-09-09 21:15:51.796862525 +0000 UTC m=+6.582706621" lastFinishedPulling="2025-09-09 21:16:02.706007287 +0000 UTC m=+17.491851383" observedRunningTime="2025-09-09 21:16:03.358350831 +0000 UTC m=+18.144195007" watchObservedRunningTime="2025-09-09 21:16:03.358464714 +0000 UTC m=+18.144308770"
Sep 9 21:16:03.394553 containerd[1524]: time="2025-09-09T21:16:03.394502427Z" level=info msg="Container af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:03.404214 containerd[1524]: time="2025-09-09T21:16:03.404175886Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\""
Sep 9 21:16:03.406358 containerd[1524]: time="2025-09-09T21:16:03.406166828Z" level=info msg="StartContainer for \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\""
Sep 9 21:16:03.407940 containerd[1524]: time="2025-09-09T21:16:03.407848640Z" level=info msg="connecting to shim af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20" address="unix:///run/containerd/s/8213ba28f66185ae5bde9491ac6c35b37551393119422e360bd89ea2e49bc957" protocol=ttrpc version=3
Sep 9 21:16:03.436823 systemd[1]: Started cri-containerd-af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20.scope - libcontainer container af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20.
Sep 9 21:16:03.495621 systemd[1]: cri-containerd-af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20.scope: Deactivated successfully.
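The `pod_startup_latency_tracker` entry for cilium-operator-5d85765b45-tvlh9 reports a `podStartSLOduration` much smaller than its `podStartE2EDuration`: the tracker subtracts image-pull time from the end-to-end duration. A quick check of that arithmetic using the `m=+` monotonic offsets from the log line itself:

```python
# Values copied from the cilium-operator pod_startup_latency_tracker entry.
e2e = 12.358464714                 # podStartE2EDuration, seconds
first_pull = 6.582706621           # firstStartedPulling, m=+ offset
last_pull = 17.491851383           # lastFinishedPulling, m=+ offset

# SLO duration excludes time spent pulling images.
slo = e2e - (last_pull - first_pull)
print(f"{slo:.9f}")  # -> 1.449319952, matching podStartSLOduration in the log
```

The kube-proxy and coredns entries elsewhere in this log show the degenerate case: zero-valued pull timestamps (`0001-01-01 ...`), so their SLO and E2E durations are equal.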
Sep 9 21:16:03.497088 containerd[1524]: time="2025-09-09T21:16:03.497053755Z" level=info msg="received exit event container_id:\"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" id:\"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" pid:3255 exited_at:{seconds:1757452563 nanos:496883190}"
Sep 9 21:16:03.497545 containerd[1524]: time="2025-09-09T21:16:03.497152838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" id:\"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" pid:3255 exited_at:{seconds:1757452563 nanos:496883190}"
Sep 9 21:16:03.497796 containerd[1524]: time="2025-09-09T21:16:03.497690335Z" level=info msg="StartContainer for \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" returns successfully"
Sep 9 21:16:03.541960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4253457860.mount: Deactivated successfully.
Sep 9 21:16:03.543666 update_engine[1502]: I20250909 21:16:03.543603 1502 update_attempter.cc:509] Updating boot flags...
Sep 9 21:16:04.355443 kubelet[2650]: E0909 21:16:04.355416 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:04.357316 kubelet[2650]: E0909 21:16:04.355789 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:04.358456 containerd[1524]: time="2025-09-09T21:16:04.358416229Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 21:16:04.382064 containerd[1524]: time="2025-09-09T21:16:04.381676233Z" level=info msg="Container e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:04.388278 containerd[1524]: time="2025-09-09T21:16:04.388237506Z" level=info msg="CreateContainer within sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\""
Sep 9 21:16:04.390688 containerd[1524]: time="2025-09-09T21:16:04.388961887Z" level=info msg="StartContainer for \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\""
Sep 9 21:16:04.391077 containerd[1524]: time="2025-09-09T21:16:04.391041548Z" level=info msg="connecting to shim e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e" address="unix:///run/containerd/s/8213ba28f66185ae5bde9491ac6c35b37551393119422e360bd89ea2e49bc957" protocol=ttrpc version=3
Sep 9 21:16:04.409718 systemd[1]: Started cri-containerd-e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e.scope - libcontainer container e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e.
Sep 9 21:16:04.438457 containerd[1524]: time="2025-09-09T21:16:04.438418061Z" level=info msg="StartContainer for \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" returns successfully"
Sep 9 21:16:04.531501 containerd[1524]: time="2025-09-09T21:16:04.531214868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" id:\"4b76795a21af7771d20da443c5a757da1842769088a3d99881c730a40a542a93\" pid:3336 exited_at:{seconds:1757452564 nanos:530905059}"
Sep 9 21:16:04.580010 kubelet[2650]: I0909 21:16:04.579977 2650 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 9 21:16:04.612386 systemd[1]: Created slice kubepods-burstable-pod14fc596c_0219_4ab9_9c18_b8290411ea97.slice - libcontainer container kubepods-burstable-pod14fc596c_0219_4ab9_9c18_b8290411ea97.slice.
Sep 9 21:16:04.623148 systemd[1]: Created slice kubepods-burstable-podaa6c8fb0_1707_456d_8d0a_07b0133a3236.slice - libcontainer container kubepods-burstable-podaa6c8fb0_1707_456d_8d0a_07b0133a3236.slice.
Sep 9 21:16:04.700984 kubelet[2650]: I0909 21:16:04.700941 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14fc596c-0219-4ab9-9c18-b8290411ea97-config-volume\") pod \"coredns-7c65d6cfc9-rgrfl\" (UID: \"14fc596c-0219-4ab9-9c18-b8290411ea97\") " pod="kube-system/coredns-7c65d6cfc9-rgrfl"
Sep 9 21:16:04.700984 kubelet[2650]: I0909 21:16:04.700984 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jfkd\" (UniqueName: \"kubernetes.io/projected/aa6c8fb0-1707-456d-8d0a-07b0133a3236-kube-api-access-8jfkd\") pod \"coredns-7c65d6cfc9-gxnrt\" (UID: \"aa6c8fb0-1707-456d-8d0a-07b0133a3236\") " pod="kube-system/coredns-7c65d6cfc9-gxnrt"
Sep 9 21:16:04.701151 kubelet[2650]: I0909 21:16:04.701025 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa6c8fb0-1707-456d-8d0a-07b0133a3236-config-volume\") pod \"coredns-7c65d6cfc9-gxnrt\" (UID: \"aa6c8fb0-1707-456d-8d0a-07b0133a3236\") " pod="kube-system/coredns-7c65d6cfc9-gxnrt"
Sep 9 21:16:04.701151 kubelet[2650]: I0909 21:16:04.701057 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cph6r\" (UniqueName: \"kubernetes.io/projected/14fc596c-0219-4ab9-9c18-b8290411ea97-kube-api-access-cph6r\") pod \"coredns-7c65d6cfc9-rgrfl\" (UID: \"14fc596c-0219-4ab9-9c18-b8290411ea97\") " pod="kube-system/coredns-7c65d6cfc9-rgrfl"
Sep 9 21:16:04.915817 kubelet[2650]: E0909 21:16:04.915709 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:04.917732 containerd[1524]: time="2025-09-09T21:16:04.917697028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rgrfl,Uid:14fc596c-0219-4ab9-9c18-b8290411ea97,Namespace:kube-system,Attempt:0,}"
Sep 9 21:16:04.927841 kubelet[2650]: E0909 21:16:04.927815 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:04.930632 containerd[1524]: time="2025-09-09T21:16:04.930190796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gxnrt,Uid:aa6c8fb0-1707-456d-8d0a-07b0133a3236,Namespace:kube-system,Attempt:0,}"
Sep 9 21:16:05.363353 kubelet[2650]: E0909 21:16:05.363299 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:05.381965 kubelet[2650]: I0909 21:16:05.381911 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qgdb6" podStartSLOduration=6.312108117 podStartE2EDuration="15.381896499s" podCreationTimestamp="2025-09-09 21:15:50 +0000 UTC" firstStartedPulling="2025-09-09 21:15:51.452898462 +0000 UTC m=+6.238742518" lastFinishedPulling="2025-09-09 21:16:00.522686804 +0000 UTC m=+15.308530900" observedRunningTime="2025-09-09 21:16:05.38158973 +0000 UTC m=+20.167433866" watchObservedRunningTime="2025-09-09 21:16:05.381896499 +0000 UTC m=+20.167740595"
Sep 9 21:16:06.365139 kubelet[2650]: E0909 21:16:06.364771 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:06.471851 systemd-networkd[1449]: cilium_host: Link UP
Sep 9 21:16:06.472022 systemd-networkd[1449]: cilium_net: Link UP
Sep 9 21:16:06.472224 systemd-networkd[1449]: cilium_host: Gained carrier
Sep 9 21:16:06.472504 systemd-networkd[1449]: cilium_net: Gained carrier
Sep 9 21:16:06.534716 systemd-networkd[1449]: cilium_net: Gained IPv6LL
Sep 9 21:16:06.541022 systemd-networkd[1449]: cilium_vxlan: Link UP
Sep 9 21:16:06.541027 systemd-networkd[1449]: cilium_vxlan: Gained carrier
Sep 9 21:16:06.671779 systemd-networkd[1449]: cilium_host: Gained IPv6LL
Sep 9 21:16:06.787620 kernel: NET: Registered PF_ALG protocol family
Sep 9 21:16:07.336319 systemd-networkd[1449]: lxc_health: Link UP
Sep 9 21:16:07.336529 systemd-networkd[1449]: lxc_health: Gained carrier
Sep 9 21:16:07.367583 kubelet[2650]: E0909 21:16:07.367535 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:07.460649 systemd-networkd[1449]: lxcde7398c535b3: Link UP
Sep 9 21:16:07.461827 kernel: eth0: renamed from tmp4a7df
Sep 9 21:16:07.463185 systemd-networkd[1449]: lxcde7398c535b3: Gained carrier
Sep 9 21:16:07.470132 systemd-networkd[1449]: lxc38e7b175d218: Link UP
Sep 9 21:16:07.478585 kernel: eth0: renamed from tmp6cbd5
Sep 9 21:16:07.480356 systemd-networkd[1449]: lxc38e7b175d218: Gained carrier
Sep 9 21:16:07.839774 systemd-networkd[1449]: cilium_vxlan: Gained IPv6LL
Sep 9 21:16:08.351732 systemd-networkd[1449]: lxc_health: Gained IPv6LL
Sep 9 21:16:08.369128 kubelet[2650]: E0909 21:16:08.368914 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:09.183760 systemd-networkd[1449]: lxcde7398c535b3: Gained IPv6LL
Sep 9 21:16:09.370757 kubelet[2650]: E0909 21:16:09.370684 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:09.503783 systemd-networkd[1449]: lxc38e7b175d218: Gained IPv6LL
Sep 9 21:16:10.985763 containerd[1524]: time="2025-09-09T21:16:10.985708868Z" level=info msg="connecting to shim 6cbd5db13c5fc46e4622d255ea2a49b3d2f32ae97185f7755f9eee9daa85a83e" address="unix:///run/containerd/s/91c3310157b74371d677891049eae8149755e02fa957e67f6ed30a71a9cd275b" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:16:10.987287 containerd[1524]: time="2025-09-09T21:16:10.987254422Z" level=info msg="connecting to shim 4a7df7cb79409c5a739d6a881611c9ebb4de670c67953e621a41a168651e691a" address="unix:///run/containerd/s/45dd069e42ddcc7270f0644e6a15169d200155d9fefc53027c193e18d68c2b27" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:16:11.010734 systemd[1]: Started cri-containerd-4a7df7cb79409c5a739d6a881611c9ebb4de670c67953e621a41a168651e691a.scope - libcontainer container 4a7df7cb79409c5a739d6a881611c9ebb4de670c67953e621a41a168651e691a.
Sep 9 21:16:11.013892 systemd[1]: Started cri-containerd-6cbd5db13c5fc46e4622d255ea2a49b3d2f32ae97185f7755f9eee9daa85a83e.scope - libcontainer container 6cbd5db13c5fc46e4622d255ea2a49b3d2f32ae97185f7755f9eee9daa85a83e.
Sep 9 21:16:11.025001 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 21:16:11.026372 systemd-resolved[1350]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 9 21:16:11.049442 containerd[1524]: time="2025-09-09T21:16:11.049400435Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-gxnrt,Uid:aa6c8fb0-1707-456d-8d0a-07b0133a3236,Namespace:kube-system,Attempt:0,} returns sandbox id \"6cbd5db13c5fc46e4622d255ea2a49b3d2f32ae97185f7755f9eee9daa85a83e\""
Sep 9 21:16:11.050189 kubelet[2650]: E0909 21:16:11.050164 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:11.053828 containerd[1524]: time="2025-09-09T21:16:11.053757127Z" level=info msg="CreateContainer within sandbox \"6cbd5db13c5fc46e4622d255ea2a49b3d2f32ae97185f7755f9eee9daa85a83e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 21:16:11.064214 containerd[1524]: time="2025-09-09T21:16:11.064181189Z" level=info msg="Container 51e2911bb3bbe1b89037f22ce69ab352a1dad52f3456251cac4fd0197921fb8a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:11.072093 containerd[1524]: time="2025-09-09T21:16:11.072046476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-rgrfl,Uid:14fc596c-0219-4ab9-9c18-b8290411ea97,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a7df7cb79409c5a739d6a881611c9ebb4de670c67953e621a41a168651e691a\""
Sep 9 21:16:11.072341 containerd[1524]: time="2025-09-09T21:16:11.072317361Z" level=info msg="CreateContainer within sandbox \"6cbd5db13c5fc46e4622d255ea2a49b3d2f32ae97185f7755f9eee9daa85a83e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51e2911bb3bbe1b89037f22ce69ab352a1dad52f3456251cac4fd0197921fb8a\""
Sep 9 21:16:11.072943 kubelet[2650]: E0909 21:16:11.072917 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:11.073051 containerd[1524]: time="2025-09-09T21:16:11.072913334Z" level=info msg="StartContainer for \"51e2911bb3bbe1b89037f22ce69ab352a1dad52f3456251cac4fd0197921fb8a\""
Sep 9 21:16:11.074667 containerd[1524]: time="2025-09-09T21:16:11.074632850Z" level=info msg="CreateContainer within sandbox \"4a7df7cb79409c5a739d6a881611c9ebb4de670c67953e621a41a168651e691a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 9 21:16:11.074908 containerd[1524]: time="2025-09-09T21:16:11.074841215Z" level=info msg="connecting to shim 51e2911bb3bbe1b89037f22ce69ab352a1dad52f3456251cac4fd0197921fb8a" address="unix:///run/containerd/s/91c3310157b74371d677891049eae8149755e02fa957e67f6ed30a71a9cd275b" protocol=ttrpc version=3
Sep 9 21:16:11.081387 containerd[1524]: time="2025-09-09T21:16:11.081355953Z" level=info msg="Container b2c1e88f7d0ba6690819a4a7fe8bdf1ab62cb74959940c031db358c67332cb16: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:16:11.089221 containerd[1524]: time="2025-09-09T21:16:11.089166839Z" level=info msg="CreateContainer within sandbox \"4a7df7cb79409c5a739d6a881611c9ebb4de670c67953e621a41a168651e691a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2c1e88f7d0ba6690819a4a7fe8bdf1ab62cb74959940c031db358c67332cb16\""
Sep 9 21:16:11.090196 containerd[1524]: time="2025-09-09T21:16:11.090145660Z" level=info msg="StartContainer for \"b2c1e88f7d0ba6690819a4a7fe8bdf1ab62cb74959940c031db358c67332cb16\""
Sep 9 21:16:11.092084 containerd[1524]: time="2025-09-09T21:16:11.092058340Z" level=info msg="connecting to shim b2c1e88f7d0ba6690819a4a7fe8bdf1ab62cb74959940c031db358c67332cb16" address="unix:///run/containerd/s/45dd069e42ddcc7270f0644e6a15169d200155d9fefc53027c193e18d68c2b27" protocol=ttrpc version=3
Sep 9 21:16:11.097721 systemd[1]: Started cri-containerd-51e2911bb3bbe1b89037f22ce69ab352a1dad52f3456251cac4fd0197921fb8a.scope - libcontainer container 51e2911bb3bbe1b89037f22ce69ab352a1dad52f3456251cac4fd0197921fb8a.
Sep 9 21:16:11.119761 systemd[1]: Started cri-containerd-b2c1e88f7d0ba6690819a4a7fe8bdf1ab62cb74959940c031db358c67332cb16.scope - libcontainer container b2c1e88f7d0ba6690819a4a7fe8bdf1ab62cb74959940c031db358c67332cb16.
Sep 9 21:16:11.159593 containerd[1524]: time="2025-09-09T21:16:11.159533853Z" level=info msg="StartContainer for \"51e2911bb3bbe1b89037f22ce69ab352a1dad52f3456251cac4fd0197921fb8a\" returns successfully"
Sep 9 21:16:11.160335 containerd[1524]: time="2025-09-09T21:16:11.160223707Z" level=info msg="StartContainer for \"b2c1e88f7d0ba6690819a4a7fe8bdf1ab62cb74959940c031db358c67332cb16\" returns successfully"
Sep 9 21:16:11.376531 kubelet[2650]: E0909 21:16:11.375824 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:11.379316 kubelet[2650]: E0909 21:16:11.379232 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:11.391320 kubelet[2650]: I0909 21:16:11.390852 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-gxnrt" podStartSLOduration=20.390838763 podStartE2EDuration="20.390838763s" podCreationTimestamp="2025-09-09 21:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:16:11.390661999 +0000 UTC m=+26.176506175" watchObservedRunningTime="2025-09-09 21:16:11.390838763 +0000 UTC m=+26.176682859"
Sep 9 21:16:11.416599 kubelet[2650]: I0909 21:16:11.415992 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-rgrfl" podStartSLOduration=20.415977857 podStartE2EDuration="20.415977857s" podCreationTimestamp="2025-09-09 21:15:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:16:11.404128485 +0000 UTC m=+26.189972581" watchObservedRunningTime="2025-09-09 21:16:11.415977857 +0000 UTC m=+26.201821953"
Sep 9 21:16:11.969360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747159255.mount: Deactivated successfully.
Sep 9 21:16:11.989050 systemd[1]: Started sshd@7-10.0.0.61:22-10.0.0.1:46012.service - OpenSSH per-connection server daemon (10.0.0.1:46012).
Sep 9 21:16:12.039842 sshd[3991]: Accepted publickey for core from 10.0.0.1 port 46012 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:12.040971 sshd-session[3991]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:12.044852 systemd-logind[1501]: New session 8 of user core.
Sep 9 21:16:12.056717 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 9 21:16:12.176394 sshd[3994]: Connection closed by 10.0.0.1 port 46012
Sep 9 21:16:12.176696 sshd-session[3991]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:12.180839 systemd[1]: sshd@7-10.0.0.61:22-10.0.0.1:46012.service: Deactivated successfully.
Sep 9 21:16:12.182918 systemd[1]: session-8.scope: Deactivated successfully.
Sep 9 21:16:12.183772 systemd-logind[1501]: Session 8 logged out. Waiting for processes to exit.
Sep 9 21:16:12.185034 systemd-logind[1501]: Removed session 8.
Sep 9 21:16:12.381005 kubelet[2650]: E0909 21:16:12.380968 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:12.381314 kubelet[2650]: E0909 21:16:12.381038 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:13.382726 kubelet[2650]: E0909 21:16:13.382628 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:13.382726 kubelet[2650]: E0909 21:16:13.382702 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:17.192839 systemd[1]: Started sshd@8-10.0.0.61:22-10.0.0.1:46016.service - OpenSSH per-connection server daemon (10.0.0.1:46016).
Sep 9 21:16:17.253741 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 46016 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:17.254777 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:17.258267 systemd-logind[1501]: New session 9 of user core.
Sep 9 21:16:17.264703 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 9 21:16:17.378346 sshd[4013]: Connection closed by 10.0.0.1 port 46016
Sep 9 21:16:17.378133 sshd-session[4010]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:17.381560 systemd[1]: sshd@8-10.0.0.61:22-10.0.0.1:46016.service: Deactivated successfully.
Sep 9 21:16:17.383561 systemd[1]: session-9.scope: Deactivated successfully.
Sep 9 21:16:17.384198 systemd-logind[1501]: Session 9 logged out. Waiting for processes to exit.
Sep 9 21:16:17.385080 systemd-logind[1501]: Removed session 9.
Sep 9 21:16:22.393713 systemd[1]: Started sshd@9-10.0.0.61:22-10.0.0.1:51994.service - OpenSSH per-connection server daemon (10.0.0.1:51994).
Sep 9 21:16:22.442125 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 51994 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:22.443147 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:22.446834 systemd-logind[1501]: New session 10 of user core.
Sep 9 21:16:22.461743 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 9 21:16:22.569360 sshd[4035]: Connection closed by 10.0.0.1 port 51994
Sep 9 21:16:22.569691 sshd-session[4032]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:22.573031 systemd[1]: sshd@9-10.0.0.61:22-10.0.0.1:51994.service: Deactivated successfully.
Sep 9 21:16:22.574743 systemd[1]: session-10.scope: Deactivated successfully.
Sep 9 21:16:22.575500 systemd-logind[1501]: Session 10 logged out. Waiting for processes to exit.
Sep 9 21:16:22.576587 systemd-logind[1501]: Removed session 10.
Sep 9 21:16:27.591619 systemd[1]: Started sshd@10-10.0.0.61:22-10.0.0.1:52002.service - OpenSSH per-connection server daemon (10.0.0.1:52002).
Sep 9 21:16:27.649243 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 52002 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:27.650344 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:27.653848 systemd-logind[1501]: New session 11 of user core.
Sep 9 21:16:27.662721 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 9 21:16:27.769465 sshd[4052]: Connection closed by 10.0.0.1 port 52002
Sep 9 21:16:27.769802 sshd-session[4049]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:27.784907 systemd[1]: sshd@10-10.0.0.61:22-10.0.0.1:52002.service: Deactivated successfully.
Sep 9 21:16:27.786506 systemd[1]: session-11.scope: Deactivated successfully.
Sep 9 21:16:27.787262 systemd-logind[1501]: Session 11 logged out. Waiting for processes to exit.
Sep 9 21:16:27.789520 systemd[1]: Started sshd@11-10.0.0.61:22-10.0.0.1:52014.service - OpenSSH per-connection server daemon (10.0.0.1:52014).
Sep 9 21:16:27.790227 systemd-logind[1501]: Removed session 11.
Sep 9 21:16:27.846859 sshd[4066]: Accepted publickey for core from 10.0.0.1 port 52014 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:27.848115 sshd-session[4066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:27.852814 systemd-logind[1501]: New session 12 of user core.
Sep 9 21:16:27.858738 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 9 21:16:28.001873 sshd[4069]: Connection closed by 10.0.0.1 port 52014
Sep 9 21:16:28.002678 sshd-session[4066]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:28.014790 systemd[1]: sshd@11-10.0.0.61:22-10.0.0.1:52014.service: Deactivated successfully.
Sep 9 21:16:28.018778 systemd[1]: session-12.scope: Deactivated successfully.
Sep 9 21:16:28.019969 systemd-logind[1501]: Session 12 logged out. Waiting for processes to exit.
Sep 9 21:16:28.022866 systemd[1]: Started sshd@12-10.0.0.61:22-10.0.0.1:52030.service - OpenSSH per-connection server daemon (10.0.0.1:52030).
Sep 9 21:16:28.024780 systemd-logind[1501]: Removed session 12.
Sep 9 21:16:28.082479 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 52030 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:28.083558 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:28.087640 systemd-logind[1501]: New session 13 of user core.
Sep 9 21:16:28.099709 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 9 21:16:28.211230 sshd[4083]: Connection closed by 10.0.0.1 port 52030
Sep 9 21:16:28.211587 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:28.215114 systemd-logind[1501]: Session 13 logged out. Waiting for processes to exit.
Sep 9 21:16:28.215744 systemd[1]: sshd@12-10.0.0.61:22-10.0.0.1:52030.service: Deactivated successfully.
Sep 9 21:16:28.219166 systemd[1]: session-13.scope: Deactivated successfully.
Sep 9 21:16:28.220872 systemd-logind[1501]: Removed session 13.
Sep 9 21:16:33.223357 systemd[1]: Started sshd@13-10.0.0.61:22-10.0.0.1:39562.service - OpenSSH per-connection server daemon (10.0.0.1:39562).
Sep 9 21:16:33.269954 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 39562 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:33.271872 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:33.278034 systemd-logind[1501]: New session 14 of user core.
Sep 9 21:16:33.287719 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 9 21:16:33.397889 sshd[4099]: Connection closed by 10.0.0.1 port 39562
Sep 9 21:16:33.398385 sshd-session[4096]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:33.402791 systemd[1]: sshd@13-10.0.0.61:22-10.0.0.1:39562.service: Deactivated successfully.
Sep 9 21:16:33.405286 systemd[1]: session-14.scope: Deactivated successfully.
Sep 9 21:16:33.407346 systemd-logind[1501]: Session 14 logged out. Waiting for processes to exit.
Sep 9 21:16:33.410989 systemd-logind[1501]: Removed session 14.
Sep 9 21:16:38.410774 systemd[1]: Started sshd@14-10.0.0.61:22-10.0.0.1:39570.service - OpenSSH per-connection server daemon (10.0.0.1:39570).
Sep 9 21:16:38.470204 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 39570 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:38.471198 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:38.475630 systemd-logind[1501]: New session 15 of user core.
Sep 9 21:16:38.482733 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 9 21:16:38.589837 sshd[4115]: Connection closed by 10.0.0.1 port 39570
Sep 9 21:16:38.590278 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:38.605661 systemd[1]: sshd@14-10.0.0.61:22-10.0.0.1:39570.service: Deactivated successfully.
Sep 9 21:16:38.607095 systemd[1]: session-15.scope: Deactivated successfully.
Sep 9 21:16:38.608635 systemd-logind[1501]: Session 15 logged out. Waiting for processes to exit.
Sep 9 21:16:38.609910 systemd[1]: Started sshd@15-10.0.0.61:22-10.0.0.1:39584.service - OpenSSH per-connection server daemon (10.0.0.1:39584).
Sep 9 21:16:38.611287 systemd-logind[1501]: Removed session 15.
Sep 9 21:16:38.661255 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 39584 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:38.662442 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:38.666811 systemd-logind[1501]: New session 16 of user core.
Sep 9 21:16:38.675748 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 9 21:16:38.843605 sshd[4132]: Connection closed by 10.0.0.1 port 39584
Sep 9 21:16:38.843885 sshd-session[4129]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:38.857528 systemd[1]: sshd@15-10.0.0.61:22-10.0.0.1:39584.service: Deactivated successfully.
Sep 9 21:16:38.859198 systemd[1]: session-16.scope: Deactivated successfully.
Sep 9 21:16:38.859998 systemd-logind[1501]: Session 16 logged out. Waiting for processes to exit.
Sep 9 21:16:38.862048 systemd[1]: Started sshd@16-10.0.0.61:22-10.0.0.1:39598.service - OpenSSH per-connection server daemon (10.0.0.1:39598).
Sep 9 21:16:38.863482 systemd-logind[1501]: Removed session 16.
Sep 9 21:16:38.920466 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 39598 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:38.921703 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:38.926709 systemd-logind[1501]: New session 17 of user core.
Sep 9 21:16:38.934695 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 9 21:16:39.992149 sshd[4147]: Connection closed by 10.0.0.1 port 39598
Sep 9 21:16:39.993694 sshd-session[4144]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:40.003458 systemd[1]: sshd@16-10.0.0.61:22-10.0.0.1:39598.service: Deactivated successfully.
Sep 9 21:16:40.009108 systemd[1]: session-17.scope: Deactivated successfully.
Sep 9 21:16:40.012248 systemd-logind[1501]: Session 17 logged out. Waiting for processes to exit.
Sep 9 21:16:40.016908 systemd[1]: Started sshd@17-10.0.0.61:22-10.0.0.1:35030.service - OpenSSH per-connection server daemon (10.0.0.1:35030).
Sep 9 21:16:40.020027 systemd-logind[1501]: Removed session 17.
Sep 9 21:16:40.073606 sshd[4167]: Accepted publickey for core from 10.0.0.1 port 35030 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:40.074999 sshd-session[4167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:40.078767 systemd-logind[1501]: New session 18 of user core.
Sep 9 21:16:40.089696 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 9 21:16:40.303650 sshd[4170]: Connection closed by 10.0.0.1 port 35030
Sep 9 21:16:40.303829 sshd-session[4167]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:40.313657 systemd[1]: sshd@17-10.0.0.61:22-10.0.0.1:35030.service: Deactivated successfully.
Sep 9 21:16:40.316513 systemd[1]: session-18.scope: Deactivated successfully.
Sep 9 21:16:40.319156 systemd-logind[1501]: Session 18 logged out. Waiting for processes to exit.
Sep 9 21:16:40.321209 systemd[1]: Started sshd@18-10.0.0.61:22-10.0.0.1:35032.service - OpenSSH per-connection server daemon (10.0.0.1:35032).
Sep 9 21:16:40.322966 systemd-logind[1501]: Removed session 18.
Sep 9 21:16:40.384808 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 35032 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:40.386027 sshd-session[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:40.390091 systemd-logind[1501]: New session 19 of user core.
Sep 9 21:16:40.400774 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 9 21:16:40.512728 sshd[4185]: Connection closed by 10.0.0.1 port 35032
Sep 9 21:16:40.513055 sshd-session[4182]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:40.516407 systemd[1]: sshd@18-10.0.0.61:22-10.0.0.1:35032.service: Deactivated successfully.
Sep 9 21:16:40.518115 systemd[1]: session-19.scope: Deactivated successfully.
Sep 9 21:16:40.518817 systemd-logind[1501]: Session 19 logged out. Waiting for processes to exit.
Sep 9 21:16:40.519828 systemd-logind[1501]: Removed session 19.
Sep 9 21:16:45.535646 systemd[1]: Started sshd@19-10.0.0.61:22-10.0.0.1:35042.service - OpenSSH per-connection server daemon (10.0.0.1:35042).
Sep 9 21:16:45.596224 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 35042 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:45.597297 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:45.601237 systemd-logind[1501]: New session 20 of user core.
Sep 9 21:16:45.618745 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 9 21:16:45.721379 sshd[4208]: Connection closed by 10.0.0.1 port 35042
Sep 9 21:16:45.721959 sshd-session[4205]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:45.725154 systemd[1]: sshd@19-10.0.0.61:22-10.0.0.1:35042.service: Deactivated successfully.
Sep 9 21:16:45.726760 systemd[1]: session-20.scope: Deactivated successfully.
Sep 9 21:16:45.728733 systemd-logind[1501]: Session 20 logged out. Waiting for processes to exit.
Sep 9 21:16:45.729621 systemd-logind[1501]: Removed session 20.
Sep 9 21:16:50.736624 systemd[1]: Started sshd@20-10.0.0.61:22-10.0.0.1:37470.service - OpenSSH per-connection server daemon (10.0.0.1:37470).
Sep 9 21:16:50.803075 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 37470 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:50.804087 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:50.807702 systemd-logind[1501]: New session 21 of user core.
Sep 9 21:16:50.815708 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 9 21:16:50.924244 sshd[4225]: Connection closed by 10.0.0.1 port 37470
Sep 9 21:16:50.924314 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:50.927664 systemd[1]: sshd@20-10.0.0.61:22-10.0.0.1:37470.service: Deactivated successfully.
Sep 9 21:16:50.930399 systemd[1]: session-21.scope: Deactivated successfully.
Sep 9 21:16:50.931100 systemd-logind[1501]: Session 21 logged out. Waiting for processes to exit.
Sep 9 21:16:50.932173 systemd-logind[1501]: Removed session 21.
Sep 9 21:16:55.941627 systemd[1]: Started sshd@21-10.0.0.61:22-10.0.0.1:37486.service - OpenSSH per-connection server daemon (10.0.0.1:37486).
Sep 9 21:16:55.986862 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 37486 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:55.988113 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:55.992382 systemd-logind[1501]: New session 22 of user core.
Sep 9 21:16:55.999712 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 9 21:16:56.105689 sshd[4243]: Connection closed by 10.0.0.1 port 37486
Sep 9 21:16:56.106150 sshd-session[4240]: pam_unix(sshd:session): session closed for user core
Sep 9 21:16:56.115893 systemd[1]: sshd@21-10.0.0.61:22-10.0.0.1:37486.service: Deactivated successfully.
Sep 9 21:16:56.117454 systemd[1]: session-22.scope: Deactivated successfully.
Sep 9 21:16:56.118115 systemd-logind[1501]: Session 22 logged out. Waiting for processes to exit.
Sep 9 21:16:56.120498 systemd[1]: Started sshd@22-10.0.0.61:22-10.0.0.1:37496.service - OpenSSH per-connection server daemon (10.0.0.1:37496).
Sep 9 21:16:56.121367 systemd-logind[1501]: Removed session 22.
Sep 9 21:16:56.175925 sshd[4256]: Accepted publickey for core from 10.0.0.1 port 37496 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:16:56.176942 sshd-session[4256]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:16:56.181346 systemd-logind[1501]: New session 23 of user core.
Sep 9 21:16:56.188697 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 9 21:16:58.294183 kubelet[2650]: E0909 21:16:58.294079 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:16:58.579996 containerd[1524]: time="2025-09-09T21:16:58.577821734Z" level=info msg="StopContainer for \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" with timeout 30 (s)"
Sep 9 21:16:58.587879 containerd[1524]: time="2025-09-09T21:16:58.586882993Z" level=info msg="Stop container \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" with signal terminated"
Sep 9 21:16:58.615358 systemd[1]: cri-containerd-8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d.scope: Deactivated successfully.
Sep 9 21:16:58.617413 containerd[1524]: time="2025-09-09T21:16:58.617376017Z" level=info msg="received exit event container_id:\"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" id:\"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" pid:3223 exited_at:{seconds:1757452618 nanos:616702455}"
Sep 9 21:16:58.617544 containerd[1524]: time="2025-09-09T21:16:58.617510097Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" id:\"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" pid:3223 exited_at:{seconds:1757452618 nanos:616702455}"
Sep 9 21:16:58.636457 containerd[1524]: time="2025-09-09T21:16:58.636419697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" id:\"24b3127b5a9b13f3c5119bac47625aacf5fd31e0463c3e63f9df5dcd0c4abaa9\" pid:4287 exited_at:{seconds:1757452618 nanos:636136696}"
Sep 9 21:16:58.640362 containerd[1524]: time="2025-09-09T21:16:58.640321625Z" level=info msg="StopContainer for \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" with timeout 2 (s)"
Sep 9 21:16:58.640776 containerd[1524]: time="2025-09-09T21:16:58.640749946Z" level=info msg="Stop container \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" with signal terminated"
Sep 9 21:16:58.642258 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d-rootfs.mount: Deactivated successfully.
Sep 9 21:16:58.648841 systemd-networkd[1449]: lxc_health: Link DOWN
Sep 9 21:16:58.649149 systemd-networkd[1449]: lxc_health: Lost carrier
Sep 9 21:16:58.655637 containerd[1524]: time="2025-09-09T21:16:58.655587417Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 9 21:16:58.657892 containerd[1524]: time="2025-09-09T21:16:58.657824942Z" level=info msg="StopContainer for \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" returns successfully"
Sep 9 21:16:58.660631 containerd[1524]: time="2025-09-09T21:16:58.660223547Z" level=info msg="StopPodSandbox for \"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\""
Sep 9 21:16:58.666980 systemd[1]: cri-containerd-e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e.scope: Deactivated successfully.
Sep 9 21:16:58.667281 systemd[1]: cri-containerd-e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e.scope: Consumed 6.086s CPU time, 125.1M memory peak, 144K read from disk, 14.3M written to disk.
Sep 9 21:16:58.668877 containerd[1524]: time="2025-09-09T21:16:58.668842605Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" id:\"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" pid:3307 exited_at:{seconds:1757452618 nanos:667923843}"
Sep 9 21:16:58.668877 containerd[1524]: time="2025-09-09T21:16:58.668857725Z" level=info msg="received exit event container_id:\"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" id:\"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" pid:3307 exited_at:{seconds:1757452618 nanos:667923843}"
Sep 9 21:16:58.670300 containerd[1524]: time="2025-09-09T21:16:58.670243288Z" level=info msg="Container to stop \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:16:58.677144 systemd[1]: cri-containerd-a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501.scope: Deactivated successfully.
Sep 9 21:16:58.685604 containerd[1524]: time="2025-09-09T21:16:58.684853958Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" id:\"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" pid:2929 exit_status:137 exited_at:{seconds:1757452618 nanos:683891756}"
Sep 9 21:16:58.690322 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e-rootfs.mount: Deactivated successfully.
Sep 9 21:16:58.710146 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501-rootfs.mount: Deactivated successfully.
Sep 9 21:16:58.716502 containerd[1524]: time="2025-09-09T21:16:58.716448105Z" level=info msg="StopContainer for \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" returns successfully"
Sep 9 21:16:58.717113 containerd[1524]: time="2025-09-09T21:16:58.717086106Z" level=info msg="StopPodSandbox for \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\""
Sep 9 21:16:58.717189 containerd[1524]: time="2025-09-09T21:16:58.717143066Z" level=info msg="Container to stop \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:16:58.717189 containerd[1524]: time="2025-09-09T21:16:58.717154546Z" level=info msg="Container to stop \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:16:58.717189 containerd[1524]: time="2025-09-09T21:16:58.717163386Z" level=info msg="Container to stop \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:16:58.717189 containerd[1524]: time="2025-09-09T21:16:58.717171466Z" level=info msg="Container to stop \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:16:58.717189 containerd[1524]: time="2025-09-09T21:16:58.717179746Z" level=info msg="Container to stop \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 9 21:16:58.723207 containerd[1524]: time="2025-09-09T21:16:58.723136799Z" level=info msg="shim disconnected" id=a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501 namespace=k8s.io
Sep 9 21:16:58.723207 containerd[1524]: time="2025-09-09T21:16:58.723173039Z" level=warning msg="cleaning up after shim disconnected" id=a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501 namespace=k8s.io
Sep 9 21:16:58.723207 containerd[1524]: time="2025-09-09T21:16:58.723205279Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 21:16:58.723708 systemd[1]: cri-containerd-fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636.scope: Deactivated successfully.
Sep 9 21:16:58.742415 containerd[1524]: time="2025-09-09T21:16:58.742368039Z" level=error msg="Failed to handle event container_id:\"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" id:\"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" pid:2929 exit_status:137 exited_at:{seconds:1757452618 nanos:683891756} for a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501" error="failed to handle container TaskExit event: failed to stop sandbox: failed to delete task: ttrpc: closed"
Sep 9 21:16:58.743199 containerd[1524]: time="2025-09-09T21:16:58.743161201Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" id:\"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" pid:2800 exit_status:137 exited_at:{seconds:1757452618 nanos:725767444}"
Sep 9 21:16:58.744189 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501-shm.mount: Deactivated successfully.
Sep 9 21:16:58.745162 containerd[1524]: time="2025-09-09T21:16:58.745135165Z" level=info msg="TearDown network for sandbox \"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" successfully"
Sep 9 21:16:58.745207 containerd[1524]: time="2025-09-09T21:16:58.745163405Z" level=info msg="StopPodSandbox for \"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" returns successfully"
Sep 9 21:16:58.748163 containerd[1524]: time="2025-09-09T21:16:58.748127811Z" level=info msg="received exit event sandbox_id:\"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" exit_status:137 exited_at:{seconds:1757452618 nanos:683891756}"
Sep 9 21:16:58.749306 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636-rootfs.mount: Deactivated successfully.
Sep 9 21:16:58.757402 containerd[1524]: time="2025-09-09T21:16:58.756794749Z" level=info msg="received exit event sandbox_id:\"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" exit_status:137 exited_at:{seconds:1757452618 nanos:725767444}"
Sep 9 21:16:58.757402 containerd[1524]: time="2025-09-09T21:16:58.757284710Z" level=info msg="TearDown network for sandbox \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" successfully"
Sep 9 21:16:58.757402 containerd[1524]: time="2025-09-09T21:16:58.757307831Z" level=info msg="StopPodSandbox for \"fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636\" returns successfully"
Sep 9 21:16:58.760826 containerd[1524]: time="2025-09-09T21:16:58.760786718Z" level=info msg="shim disconnected" id=fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636 namespace=k8s.io
Sep 9 21:16:58.760903 containerd[1524]: time="2025-09-09T21:16:58.760819438Z" level=warning msg="cleaning up after shim disconnected" id=fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636 namespace=k8s.io
Sep 9 21:16:58.760903 containerd[1524]: time="2025-09-09T21:16:58.760848038Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 9 21:16:58.923325 kubelet[2650]: I0909 21:16:58.923120 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cni-path\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.923325 kubelet[2650]: I0909 21:16:58.923162 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-cgroup\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.923325 kubelet[2650]: I0909 21:16:58.923179 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-lib-modules\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.923325 kubelet[2650]: I0909 21:16:58.923199 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1252fff5-664a-44d8-975a-0410271d86a6-clustermesh-secrets\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.923325 kubelet[2650]: I0909 21:16:58.923214 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-run\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.923325 kubelet[2650]: I0909 21:16:58.923231 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1252fff5-664a-44d8-975a-0410271d86a6-cilium-config-path\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924134 kubelet[2650]: I0909 21:16:58.923253 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-hubble-tls\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924134 kubelet[2650]: I0909 21:16:58.923270 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6krgs\" (UniqueName: \"kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-kube-api-access-6krgs\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924134 kubelet[2650]: I0909 21:16:58.923286 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-xtables-lock\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924134 kubelet[2650]: I0909 21:16:58.923299 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-etc-cni-netd\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924134 kubelet[2650]: I0909 21:16:58.923323 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-net\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924134 kubelet[2650]: I0909 21:16:58.923340 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-kernel\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924291 kubelet[2650]: I0909 21:16:58.923357 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-bpf-maps\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924291 kubelet[2650]: I0909 21:16:58.923372 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-hostproc\") pod \"1252fff5-664a-44d8-975a-0410271d86a6\" (UID: \"1252fff5-664a-44d8-975a-0410271d86a6\") "
Sep 9 21:16:58.924291 kubelet[2650]: I0909 21:16:58.923389 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qbh\" (UniqueName: \"kubernetes.io/projected/cca4563c-beab-418e-ae3d-77f098b6fdc1-kube-api-access-77qbh\") pod \"cca4563c-beab-418e-ae3d-77f098b6fdc1\" (UID: \"cca4563c-beab-418e-ae3d-77f098b6fdc1\") "
Sep 9 21:16:58.924291 kubelet[2650]: I0909 21:16:58.923405 2650 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cca4563c-beab-418e-ae3d-77f098b6fdc1-cilium-config-path\") pod \"cca4563c-beab-418e-ae3d-77f098b6fdc1\" (UID: \"cca4563c-beab-418e-ae3d-77f098b6fdc1\") "
Sep 9 21:16:58.927657 kubelet[2650]: I0909 21:16:58.927609 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cni-path" (OuterVolumeSpecName: "cni-path") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 21:16:58.927895 kubelet[2650]: I0909 21:16:58.927849 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 21:16:58.928050 kubelet[2650]: I0909 21:16:58.927961 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Sep 9 21:16:58.928050 kubelet[2650]: I0909 21:16:58.928023 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cca4563c-beab-418e-ae3d-77f098b6fdc1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cca4563c-beab-418e-ae3d-77f098b6fdc1" (UID: "cca4563c-beab-418e-ae3d-77f098b6fdc1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Sep 9 21:16:58.928113 kubelet[2650]: I0909 21:16:58.928065 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:16:58.928113 kubelet[2650]: I0909 21:16:58.928080 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-hostproc" (OuterVolumeSpecName: "hostproc") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:16:58.928113 kubelet[2650]: I0909 21:16:58.928138 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:16:58.928113 kubelet[2650]: I0909 21:16:58.928173 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:16:58.928113 kubelet[2650]: I0909 21:16:58.928190 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:16:58.928556 kubelet[2650]: I0909 21:16:58.928497 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:16:58.930407 kubelet[2650]: I0909 21:16:58.930377 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1252fff5-664a-44d8-975a-0410271d86a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 9 21:16:58.930594 kubelet[2650]: I0909 21:16:58.930561 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 9 21:16:58.930820 kubelet[2650]: I0909 21:16:58.930781 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1252fff5-664a-44d8-975a-0410271d86a6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 9 21:16:58.931285 kubelet[2650]: I0909 21:16:58.931245 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:16:58.932185 kubelet[2650]: I0909 21:16:58.932155 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cca4563c-beab-418e-ae3d-77f098b6fdc1-kube-api-access-77qbh" (OuterVolumeSpecName: "kube-api-access-77qbh") pod "cca4563c-beab-418e-ae3d-77f098b6fdc1" (UID: "cca4563c-beab-418e-ae3d-77f098b6fdc1"). InnerVolumeSpecName "kube-api-access-77qbh". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:16:58.933337 kubelet[2650]: I0909 21:16:58.933312 2650 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-kube-api-access-6krgs" (OuterVolumeSpecName: "kube-api-access-6krgs") pod "1252fff5-664a-44d8-975a-0410271d86a6" (UID: "1252fff5-664a-44d8-975a-0410271d86a6"). InnerVolumeSpecName "kube-api-access-6krgs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 9 21:16:59.023701 kubelet[2650]: I0909 21:16:59.023632 2650 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023701 kubelet[2650]: I0909 21:16:59.023685 2650 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023701 kubelet[2650]: I0909 21:16:59.023704 2650 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qbh\" (UniqueName: \"kubernetes.io/projected/cca4563c-beab-418e-ae3d-77f098b6fdc1-kube-api-access-77qbh\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023721 2650 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cca4563c-beab-418e-ae3d-77f098b6fdc1-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023739 2650 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023748 2650 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023756 2650 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023764 2650 
reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023772 2650 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1252fff5-664a-44d8-975a-0410271d86a6-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023780 2650 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.023876 kubelet[2650]: I0909 21:16:59.023787 2650 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1252fff5-664a-44d8-975a-0410271d86a6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.024027 kubelet[2650]: I0909 21:16:59.023795 2650 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6krgs\" (UniqueName: \"kubernetes.io/projected/1252fff5-664a-44d8-975a-0410271d86a6-kube-api-access-6krgs\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.024027 kubelet[2650]: I0909 21:16:59.023802 2650 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.024027 kubelet[2650]: I0909 21:16:59.023810 2650 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.024027 kubelet[2650]: I0909 21:16:59.023818 2650 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.024027 kubelet[2650]: I0909 21:16:59.023826 2650 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1252fff5-664a-44d8-975a-0410271d86a6-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 9 21:16:59.302719 systemd[1]: Removed slice kubepods-burstable-pod1252fff5_664a_44d8_975a_0410271d86a6.slice - libcontainer container kubepods-burstable-pod1252fff5_664a_44d8_975a_0410271d86a6.slice. Sep 9 21:16:59.302828 systemd[1]: kubepods-burstable-pod1252fff5_664a_44d8_975a_0410271d86a6.slice: Consumed 6.181s CPU time, 125.4M memory peak, 1.5M read from disk, 14.4M written to disk. Sep 9 21:16:59.304156 systemd[1]: Removed slice kubepods-besteffort-podcca4563c_beab_418e_ae3d_77f098b6fdc1.slice - libcontainer container kubepods-besteffort-podcca4563c_beab_418e_ae3d_77f098b6fdc1.slice. Sep 9 21:16:59.479499 kubelet[2650]: I0909 21:16:59.479359 2650 scope.go:117] "RemoveContainer" containerID="e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e" Sep 9 21:16:59.481906 containerd[1524]: time="2025-09-09T21:16:59.481872016Z" level=info msg="RemoveContainer for \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\"" Sep 9 21:16:59.494578 containerd[1524]: time="2025-09-09T21:16:59.494487204Z" level=info msg="RemoveContainer for \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" returns successfully" Sep 9 21:16:59.495512 kubelet[2650]: I0909 21:16:59.495461 2650 scope.go:117] "RemoveContainer" containerID="af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20" Sep 9 21:16:59.500844 containerd[1524]: time="2025-09-09T21:16:59.500811138Z" level=info msg="RemoveContainer for \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\"" Sep 9 21:16:59.505260 containerd[1524]: time="2025-09-09T21:16:59.505022948Z" 
level=info msg="RemoveContainer for \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" returns successfully" Sep 9 21:16:59.505392 kubelet[2650]: I0909 21:16:59.505365 2650 scope.go:117] "RemoveContainer" containerID="6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b" Sep 9 21:16:59.507631 containerd[1524]: time="2025-09-09T21:16:59.507604793Z" level=info msg="RemoveContainer for \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\"" Sep 9 21:16:59.514051 containerd[1524]: time="2025-09-09T21:16:59.514013368Z" level=info msg="RemoveContainer for \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" returns successfully" Sep 9 21:16:59.514210 kubelet[2650]: I0909 21:16:59.514185 2650 scope.go:117] "RemoveContainer" containerID="a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b" Sep 9 21:16:59.515744 containerd[1524]: time="2025-09-09T21:16:59.515718451Z" level=info msg="RemoveContainer for \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\"" Sep 9 21:16:59.518377 containerd[1524]: time="2025-09-09T21:16:59.518340177Z" level=info msg="RemoveContainer for \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" returns successfully" Sep 9 21:16:59.518538 kubelet[2650]: I0909 21:16:59.518518 2650 scope.go:117] "RemoveContainer" containerID="b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3" Sep 9 21:16:59.519928 containerd[1524]: time="2025-09-09T21:16:59.519905861Z" level=info msg="RemoveContainer for \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\"" Sep 9 21:16:59.522568 containerd[1524]: time="2025-09-09T21:16:59.522528987Z" level=info msg="RemoveContainer for \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" returns successfully" Sep 9 21:16:59.522759 kubelet[2650]: I0909 21:16:59.522735 2650 scope.go:117] "RemoveContainer" containerID="e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e" Sep 9 
21:16:59.522971 containerd[1524]: time="2025-09-09T21:16:59.522940508Z" level=error msg="ContainerStatus for \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\": not found" Sep 9 21:16:59.525234 kubelet[2650]: E0909 21:16:59.525190 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\": not found" containerID="e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e" Sep 9 21:16:59.525331 kubelet[2650]: I0909 21:16:59.525250 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e"} err="failed to get container status \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2f0c5f66ff6653ef92f697c19e1a9b02225e4e8ee5a7e9bdf55b876789c3f4e\": not found" Sep 9 21:16:59.525368 kubelet[2650]: I0909 21:16:59.525334 2650 scope.go:117] "RemoveContainer" containerID="af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20" Sep 9 21:16:59.525626 containerd[1524]: time="2025-09-09T21:16:59.525591313Z" level=error msg="ContainerStatus for \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\": not found" Sep 9 21:16:59.525772 kubelet[2650]: E0909 21:16:59.525745 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\": not found" containerID="af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20" Sep 9 21:16:59.525803 kubelet[2650]: I0909 21:16:59.525774 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20"} err="failed to get container status \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\": rpc error: code = NotFound desc = an error occurred when try to find container \"af672ff79808af4bdd0c396cd7276dd6499491488b527f72beb7607d4f7a8a20\": not found" Sep 9 21:16:59.525803 kubelet[2650]: I0909 21:16:59.525793 2650 scope.go:117] "RemoveContainer" containerID="6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b" Sep 9 21:16:59.526075 containerd[1524]: time="2025-09-09T21:16:59.526045755Z" level=error msg="ContainerStatus for \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\": not found" Sep 9 21:16:59.526181 kubelet[2650]: E0909 21:16:59.526159 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\": not found" containerID="6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b" Sep 9 21:16:59.526217 kubelet[2650]: I0909 21:16:59.526187 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b"} err="failed to get container status \"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"6a92f372b4e5bbde97561ee355e2b0bd99ce80555d1c6083e7a63f5e4bd7d94b\": not found" Sep 9 21:16:59.526217 kubelet[2650]: I0909 21:16:59.526203 2650 scope.go:117] "RemoveContainer" containerID="a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b" Sep 9 21:16:59.526390 containerd[1524]: time="2025-09-09T21:16:59.526359395Z" level=error msg="ContainerStatus for \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\": not found" Sep 9 21:16:59.526515 kubelet[2650]: E0909 21:16:59.526490 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\": not found" containerID="a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b" Sep 9 21:16:59.526548 kubelet[2650]: I0909 21:16:59.526518 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b"} err="failed to get container status \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a835887dfee429a522f2cef368b2e13c8582225e85967e992c0403399e4d1a2b\": not found" Sep 9 21:16:59.526548 kubelet[2650]: I0909 21:16:59.526534 2650 scope.go:117] "RemoveContainer" containerID="b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3" Sep 9 21:16:59.526761 containerd[1524]: time="2025-09-09T21:16:59.526731116Z" level=error msg="ContainerStatus for \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\": not found" Sep 9 21:16:59.526875 kubelet[2650]: E0909 21:16:59.526854 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\": not found" containerID="b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3" Sep 9 21:16:59.526875 kubelet[2650]: I0909 21:16:59.526879 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3"} err="failed to get container status \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4d8ce85ddc82e8565d67aaf9653049cb6bd06f788c70dac61dc0e6c85902fa3\": not found" Sep 9 21:16:59.526935 kubelet[2650]: I0909 21:16:59.526894 2650 scope.go:117] "RemoveContainer" containerID="8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d" Sep 9 21:16:59.528787 containerd[1524]: time="2025-09-09T21:16:59.528761361Z" level=info msg="RemoveContainer for \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\"" Sep 9 21:16:59.531357 containerd[1524]: time="2025-09-09T21:16:59.531317166Z" level=info msg="RemoveContainer for \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" returns successfully" Sep 9 21:16:59.531494 kubelet[2650]: I0909 21:16:59.531467 2650 scope.go:117] "RemoveContainer" containerID="8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d" Sep 9 21:16:59.531716 containerd[1524]: time="2025-09-09T21:16:59.531682887Z" level=error msg="ContainerStatus for \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\": not found" Sep 9 21:16:59.531821 kubelet[2650]: E0909 21:16:59.531801 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\": not found" containerID="8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d" Sep 9 21:16:59.531855 kubelet[2650]: I0909 21:16:59.531826 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d"} err="failed to get container status \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\": rpc error: code = NotFound desc = an error occurred when try to find container \"8841a4c030e577f128f0a593993332cffc476fbe63af72cf1517c9c73a516d4d\": not found" Sep 9 21:16:59.639917 systemd[1]: var-lib-kubelet-pods-cca4563c\x2dbeab\x2d418e\x2dae3d\x2d77f098b6fdc1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d77qbh.mount: Deactivated successfully. Sep 9 21:16:59.640013 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fecd44fbeeafdd03dd2d7cea167880e27e6b97e6a0188928f5b04b6619853636-shm.mount: Deactivated successfully. Sep 9 21:16:59.640064 systemd[1]: var-lib-kubelet-pods-1252fff5\x2d664a\x2d44d8\x2d975a\x2d0410271d86a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6krgs.mount: Deactivated successfully. Sep 9 21:16:59.640118 systemd[1]: var-lib-kubelet-pods-1252fff5\x2d664a\x2d44d8\x2d975a\x2d0410271d86a6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 9 21:16:59.640164 systemd[1]: var-lib-kubelet-pods-1252fff5\x2d664a\x2d44d8\x2d975a\x2d0410271d86a6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Sep 9 21:17:00.068506 containerd[1524]: time="2025-09-09T21:17:00.068444134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" id:\"a52bda084e44cb61601a60323666caf86f22f2a38e057faf35e50d60987bf501\" pid:2929 exit_status:137 exited_at:{seconds:1757452618 nanos:683891756}" Sep 9 21:17:00.345516 kubelet[2650]: E0909 21:17:00.345420 2650 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 9 21:17:00.539887 sshd[4259]: Connection closed by 10.0.0.1 port 37496 Sep 9 21:17:00.540403 sshd-session[4256]: pam_unix(sshd:session): session closed for user core Sep 9 21:17:00.551686 systemd[1]: sshd@22-10.0.0.61:22-10.0.0.1:37496.service: Deactivated successfully. Sep 9 21:17:00.553507 systemd[1]: session-23.scope: Deactivated successfully. Sep 9 21:17:00.553815 systemd[1]: session-23.scope: Consumed 1.740s CPU time, 26.6M memory peak. Sep 9 21:17:00.554366 systemd-logind[1501]: Session 23 logged out. Waiting for processes to exit. Sep 9 21:17:00.557379 systemd[1]: Started sshd@23-10.0.0.61:22-10.0.0.1:37274.service - OpenSSH per-connection server daemon (10.0.0.1:37274). Sep 9 21:17:00.557901 systemd-logind[1501]: Removed session 23. Sep 9 21:17:00.611755 sshd[4412]: Accepted publickey for core from 10.0.0.1 port 37274 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:17:00.612986 sshd-session[4412]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:17:00.616837 systemd-logind[1501]: New session 24 of user core. Sep 9 21:17:00.623707 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 9 21:17:01.295693 kubelet[2650]: I0909 21:17:01.295657 2650 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1252fff5-664a-44d8-975a-0410271d86a6" path="/var/lib/kubelet/pods/1252fff5-664a-44d8-975a-0410271d86a6/volumes" Sep 9 21:17:01.296188 kubelet[2650]: I0909 21:17:01.296164 2650 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cca4563c-beab-418e-ae3d-77f098b6fdc1" path="/var/lib/kubelet/pods/cca4563c-beab-418e-ae3d-77f098b6fdc1/volumes" Sep 9 21:17:01.360696 sshd[4415]: Connection closed by 10.0.0.1 port 37274 Sep 9 21:17:01.361002 sshd-session[4412]: pam_unix(sshd:session): session closed for user core Sep 9 21:17:01.372044 systemd[1]: sshd@23-10.0.0.61:22-10.0.0.1:37274.service: Deactivated successfully. Sep 9 21:17:01.373877 systemd[1]: session-24.scope: Deactivated successfully. Sep 9 21:17:01.376771 systemd-logind[1501]: Session 24 logged out. Waiting for processes to exit. Sep 9 21:17:01.377665 kubelet[2650]: E0909 21:17:01.377077 2650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1252fff5-664a-44d8-975a-0410271d86a6" containerName="mount-bpf-fs" Sep 9 21:17:01.377665 kubelet[2650]: E0909 21:17:01.377099 2650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1252fff5-664a-44d8-975a-0410271d86a6" containerName="clean-cilium-state" Sep 9 21:17:01.377665 kubelet[2650]: E0909 21:17:01.377107 2650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1252fff5-664a-44d8-975a-0410271d86a6" containerName="cilium-agent" Sep 9 21:17:01.377665 kubelet[2650]: E0909 21:17:01.377114 2650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1252fff5-664a-44d8-975a-0410271d86a6" containerName="apply-sysctl-overwrites" Sep 9 21:17:01.377665 kubelet[2650]: E0909 21:17:01.377121 2650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cca4563c-beab-418e-ae3d-77f098b6fdc1" containerName="cilium-operator" Sep 9 21:17:01.377665 kubelet[2650]: E0909 
21:17:01.377127 2650 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="1252fff5-664a-44d8-975a-0410271d86a6" containerName="mount-cgroup" Sep 9 21:17:01.377665 kubelet[2650]: I0909 21:17:01.377152 2650 memory_manager.go:354] "RemoveStaleState removing state" podUID="1252fff5-664a-44d8-975a-0410271d86a6" containerName="cilium-agent" Sep 9 21:17:01.377665 kubelet[2650]: I0909 21:17:01.377159 2650 memory_manager.go:354] "RemoveStaleState removing state" podUID="cca4563c-beab-418e-ae3d-77f098b6fdc1" containerName="cilium-operator" Sep 9 21:17:01.378114 systemd[1]: Started sshd@24-10.0.0.61:22-10.0.0.1:37284.service - OpenSSH per-connection server daemon (10.0.0.1:37284). Sep 9 21:17:01.381665 systemd-logind[1501]: Removed session 24. Sep 9 21:17:01.387979 systemd[1]: Created slice kubepods-burstable-pod01814052_a194_4dba_b879_366df3ea06b4.slice - libcontainer container kubepods-burstable-pod01814052_a194_4dba_b879_366df3ea06b4.slice. Sep 9 21:17:01.443362 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 37284 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE Sep 9 21:17:01.444425 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 21:17:01.448024 systemd-logind[1501]: New session 25 of user core. Sep 9 21:17:01.463743 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 9 21:17:01.512461 sshd[4430]: Connection closed by 10.0.0.1 port 37284 Sep 9 21:17:01.512760 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Sep 9 21:17:01.522433 systemd[1]: sshd@24-10.0.0.61:22-10.0.0.1:37284.service: Deactivated successfully. Sep 9 21:17:01.524716 systemd[1]: session-25.scope: Deactivated successfully. Sep 9 21:17:01.525387 systemd-logind[1501]: Session 25 logged out. Waiting for processes to exit. Sep 9 21:17:01.527514 systemd[1]: Started sshd@25-10.0.0.61:22-10.0.0.1:37286.service - OpenSSH per-connection server daemon (10.0.0.1:37286). 
Sep 9 21:17:01.528171 systemd-logind[1501]: Removed session 25.
Sep 9 21:17:01.537675 kubelet[2650]: I0909 21:17:01.537637 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-hostproc\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537675 kubelet[2650]: I0909 21:17:01.537678 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-bpf-maps\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537772 kubelet[2650]: I0909 21:17:01.537697 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-cni-path\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537772 kubelet[2650]: I0909 21:17:01.537712 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-lib-modules\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537772 kubelet[2650]: I0909 21:17:01.537727 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-cilium-run\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537839 kubelet[2650]: I0909 21:17:01.537781 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/01814052-a194-4dba-b879-366df3ea06b4-cilium-config-path\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537839 kubelet[2650]: I0909 21:17:01.537798 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-host-proc-sys-kernel\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537839 kubelet[2650]: I0909 21:17:01.537812 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-etc-cni-netd\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537839 kubelet[2650]: I0909 21:17:01.537826 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/01814052-a194-4dba-b879-366df3ea06b4-clustermesh-secrets\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537916 kubelet[2650]: I0909 21:17:01.537843 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-host-proc-sys-net\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537916 kubelet[2650]: I0909 21:17:01.537857 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-cilium-cgroup\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537916 kubelet[2650]: I0909 21:17:01.537872 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/01814052-a194-4dba-b879-366df3ea06b4-cilium-ipsec-secrets\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537916 kubelet[2650]: I0909 21:17:01.537889 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/01814052-a194-4dba-b879-366df3ea06b4-hubble-tls\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.537916 kubelet[2650]: I0909 21:17:01.537904 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44zrb\" (UniqueName: \"kubernetes.io/projected/01814052-a194-4dba-b879-366df3ea06b4-kube-api-access-44zrb\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.538017 kubelet[2650]: I0909 21:17:01.537919 2650 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/01814052-a194-4dba-b879-366df3ea06b4-xtables-lock\") pod \"cilium-5nftf\" (UID: \"01814052-a194-4dba-b879-366df3ea06b4\") " pod="kube-system/cilium-5nftf"
Sep 9 21:17:01.581098 sshd[4437]: Accepted publickey for core from 10.0.0.1 port 37286 ssh2: RSA SHA256:/os6YPp183JWsEVhW0evH0PAuBe7do22d4T7SoFOxUE
Sep 9 21:17:01.582297 sshd-session[4437]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 9 21:17:01.586273 systemd-logind[1501]: New session 26 of user core.
Sep 9 21:17:01.593701 systemd[1]: Started session-26.scope - Session 26 of User core.
Sep 9 21:17:01.701096 kubelet[2650]: E0909 21:17:01.700740 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:01.701326 containerd[1524]: time="2025-09-09T21:17:01.701187517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nftf,Uid:01814052-a194-4dba-b879-366df3ea06b4,Namespace:kube-system,Attempt:0,}"
Sep 9 21:17:01.717881 containerd[1524]: time="2025-09-09T21:17:01.717843638Z" level=info msg="connecting to shim aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e" address="unix:///run/containerd/s/174b53a7a0e1f295f962bcd7a397747e9d4480c5e29cce664bf23c48d404b630" namespace=k8s.io protocol=ttrpc version=3
Sep 9 21:17:01.743737 systemd[1]: Started cri-containerd-aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e.scope - libcontainer container aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e.
Sep 9 21:17:01.762651 containerd[1524]: time="2025-09-09T21:17:01.762540109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5nftf,Uid:01814052-a194-4dba-b879-366df3ea06b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\""
Sep 9 21:17:01.763461 kubelet[2650]: E0909 21:17:01.763435 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:01.768193 containerd[1524]: time="2025-09-09T21:17:01.768164123Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 9 21:17:01.774126 containerd[1524]: time="2025-09-09T21:17:01.774099298Z" level=info msg="Container fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:17:01.779464 containerd[1524]: time="2025-09-09T21:17:01.779415391Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa\""
Sep 9 21:17:01.779839 containerd[1524]: time="2025-09-09T21:17:01.779820552Z" level=info msg="StartContainer for \"fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa\""
Sep 9 21:17:01.780782 containerd[1524]: time="2025-09-09T21:17:01.780752714Z" level=info msg="connecting to shim fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa" address="unix:///run/containerd/s/174b53a7a0e1f295f962bcd7a397747e9d4480c5e29cce664bf23c48d404b630" protocol=ttrpc version=3
Sep 9 21:17:01.802734 systemd[1]: Started cri-containerd-fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa.scope - libcontainer container fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa.
Sep 9 21:17:01.829720 containerd[1524]: time="2025-09-09T21:17:01.829679636Z" level=info msg="StartContainer for \"fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa\" returns successfully"
Sep 9 21:17:01.838605 systemd[1]: cri-containerd-fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa.scope: Deactivated successfully.
Sep 9 21:17:01.840425 containerd[1524]: time="2025-09-09T21:17:01.840325942Z" level=info msg="received exit event container_id:\"fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa\" id:\"fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa\" pid:4510 exited_at:{seconds:1757452621 nanos:840026542}"
Sep 9 21:17:01.840621 containerd[1524]: time="2025-09-09T21:17:01.840391743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa\" id:\"fdc5f197fa75816aba2e1b67548da392292edadf245a200b29e41e4f2b3d67aa\" pid:4510 exited_at:{seconds:1757452621 nanos:840026542}"
Sep 9 21:17:02.485110 kubelet[2650]: E0909 21:17:02.485059 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:02.490344 containerd[1524]: time="2025-09-09T21:17:02.490268217Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 9 21:17:02.498527 containerd[1524]: time="2025-09-09T21:17:02.498490878Z" level=info msg="Container 9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:17:02.505808 containerd[1524]: time="2025-09-09T21:17:02.505772257Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e\""
Sep 9 21:17:02.507159 containerd[1524]: time="2025-09-09T21:17:02.506156458Z" level=info msg="StartContainer for \"9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e\""
Sep 9 21:17:02.508262 containerd[1524]: time="2025-09-09T21:17:02.508232624Z" level=info msg="connecting to shim 9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e" address="unix:///run/containerd/s/174b53a7a0e1f295f962bcd7a397747e9d4480c5e29cce664bf23c48d404b630" protocol=ttrpc version=3
Sep 9 21:17:02.529759 systemd[1]: Started cri-containerd-9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e.scope - libcontainer container 9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e.
Sep 9 21:17:02.551836 containerd[1524]: time="2025-09-09T21:17:02.551804737Z" level=info msg="StartContainer for \"9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e\" returns successfully"
Sep 9 21:17:02.557373 systemd[1]: cri-containerd-9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e.scope: Deactivated successfully.
Sep 9 21:17:02.558007 containerd[1524]: time="2025-09-09T21:17:02.557981793Z" level=info msg="received exit event container_id:\"9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e\" id:\"9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e\" pid:4554 exited_at:{seconds:1757452622 nanos:557756393}"
Sep 9 21:17:02.558255 containerd[1524]: time="2025-09-09T21:17:02.558032394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e\" id:\"9bdf591643983e6c174590756c1078830421524156abcf85310ab9cb0383c52e\" pid:4554 exited_at:{seconds:1757452622 nanos:557756393}"
Sep 9 21:17:03.496667 kubelet[2650]: E0909 21:17:03.495652 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:03.499529 containerd[1524]: time="2025-09-09T21:17:03.499489386Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 9 21:17:03.508756 containerd[1524]: time="2025-09-09T21:17:03.507670408Z" level=info msg="Container 8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:17:03.517824 containerd[1524]: time="2025-09-09T21:17:03.517777156Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967\""
Sep 9 21:17:03.518502 containerd[1524]: time="2025-09-09T21:17:03.518421797Z" level=info msg="StartContainer for \"8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967\""
Sep 9 21:17:03.520108 containerd[1524]: time="2025-09-09T21:17:03.520080282Z" level=info msg="connecting to shim 8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967" address="unix:///run/containerd/s/174b53a7a0e1f295f962bcd7a397747e9d4480c5e29cce664bf23c48d404b630" protocol=ttrpc version=3
Sep 9 21:17:03.548733 systemd[1]: Started cri-containerd-8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967.scope - libcontainer container 8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967.
Sep 9 21:17:03.578240 systemd[1]: cri-containerd-8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967.scope: Deactivated successfully.
Sep 9 21:17:03.580779 containerd[1524]: time="2025-09-09T21:17:03.580748687Z" level=info msg="received exit event container_id:\"8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967\" id:\"8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967\" pid:4600 exited_at:{seconds:1757452623 nanos:579717404}"
Sep 9 21:17:03.581235 containerd[1524]: time="2025-09-09T21:17:03.581103928Z" level=info msg="StartContainer for \"8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967\" returns successfully"
Sep 9 21:17:03.581329 containerd[1524]: time="2025-09-09T21:17:03.581191088Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967\" id:\"8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967\" pid:4600 exited_at:{seconds:1757452623 nanos:579717404}"
Sep 9 21:17:03.600500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f5219ff7def46fea204c2cb0db1afe566bf116b081601b67280f7d6f99cb967-rootfs.mount: Deactivated successfully.
Sep 9 21:17:04.500246 kubelet[2650]: E0909 21:17:04.500204 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:04.503498 containerd[1524]: time="2025-09-09T21:17:04.503449377Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 9 21:17:04.512433 containerd[1524]: time="2025-09-09T21:17:04.511845201Z" level=info msg="Container 1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:17:04.520365 containerd[1524]: time="2025-09-09T21:17:04.520330025Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a\""
Sep 9 21:17:04.521162 containerd[1524]: time="2025-09-09T21:17:04.521134467Z" level=info msg="StartContainer for \"1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a\""
Sep 9 21:17:04.522376 containerd[1524]: time="2025-09-09T21:17:04.522349710Z" level=info msg="connecting to shim 1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a" address="unix:///run/containerd/s/174b53a7a0e1f295f962bcd7a397747e9d4480c5e29cce664bf23c48d404b630" protocol=ttrpc version=3
Sep 9 21:17:04.545745 systemd[1]: Started cri-containerd-1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a.scope - libcontainer container 1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a.
Sep 9 21:17:04.567052 systemd[1]: cri-containerd-1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a.scope: Deactivated successfully.
Sep 9 21:17:04.568240 containerd[1524]: time="2025-09-09T21:17:04.568202400Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a\" id:\"1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a\" pid:4638 exited_at:{seconds:1757452624 nanos:567889680}"
Sep 9 21:17:04.569596 containerd[1524]: time="2025-09-09T21:17:04.568659002Z" level=info msg="received exit event container_id:\"1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a\" id:\"1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a\" pid:4638 exited_at:{seconds:1757452624 nanos:567889680}"
Sep 9 21:17:04.575363 containerd[1524]: time="2025-09-09T21:17:04.575332781Z" level=info msg="StartContainer for \"1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a\" returns successfully"
Sep 9 21:17:04.584688 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b8405c9fc3a9e08fc217002f0036aec4fa563d0a18307ef051b57bcdf37c50a-rootfs.mount: Deactivated successfully.
Sep 9 21:17:05.346859 kubelet[2650]: E0909 21:17:05.346815 2650 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 9 21:17:05.504714 kubelet[2650]: E0909 21:17:05.504684 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:05.508044 containerd[1524]: time="2025-09-09T21:17:05.508006442Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 9 21:17:05.521859 containerd[1524]: time="2025-09-09T21:17:05.519917557Z" level=info msg="Container b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e: CDI devices from CRI Config.CDIDevices: []"
Sep 9 21:17:05.528844 containerd[1524]: time="2025-09-09T21:17:05.528810863Z" level=info msg="CreateContainer within sandbox \"aac890b17a75982dc28a6ccc893a40b042e05d34b713becc3060c7f48bcdd33e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\""
Sep 9 21:17:05.529441 containerd[1524]: time="2025-09-09T21:17:05.529419065Z" level=info msg="StartContainer for \"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\""
Sep 9 21:17:05.533549 containerd[1524]: time="2025-09-09T21:17:05.533514757Z" level=info msg="connecting to shim b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e" address="unix:///run/containerd/s/174b53a7a0e1f295f962bcd7a397747e9d4480c5e29cce664bf23c48d404b630" protocol=ttrpc version=3
Sep 9 21:17:05.557739 systemd[1]: Started cri-containerd-b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e.scope - libcontainer container b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e.
Sep 9 21:17:05.586216 containerd[1524]: time="2025-09-09T21:17:05.586181072Z" level=info msg="StartContainer for \"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\" returns successfully"
Sep 9 21:17:05.635397 containerd[1524]: time="2025-09-09T21:17:05.635243257Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\" id:\"720d53051c456c7799e9bb0b4ae2ef79ef2d9a985be359df3efa831a3e7821fc\" pid:4704 exited_at:{seconds:1757452625 nanos:634840376}"
Sep 9 21:17:05.855693 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 9 21:17:06.511758 kubelet[2650]: E0909 21:17:06.511710 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:06.525899 kubelet[2650]: I0909 21:17:06.525823 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5nftf" podStartSLOduration=5.5257966960000005 podStartE2EDuration="5.525796696s" podCreationTimestamp="2025-09-09 21:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-09 21:17:06.525609056 +0000 UTC m=+81.311453152" watchObservedRunningTime="2025-09-09 21:17:06.525796696 +0000 UTC m=+81.311640792"
Sep 9 21:17:07.202723 kubelet[2650]: I0909 21:17:07.202664 2650 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-09T21:17:07Z","lastTransitionTime":"2025-09-09T21:17:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 9 21:17:07.702331 kubelet[2650]: E0909 21:17:07.702279 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:08.142613 containerd[1524]: time="2025-09-09T21:17:08.141808961Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\" id:\"aad399e93223c9fda5082e3f7001156930319b0edb30c9c637d5b3183e242121\" pid:5084 exit_status:1 exited_at:{seconds:1757452628 nanos:141506800}"
Sep 9 21:17:08.670314 systemd-networkd[1449]: lxc_health: Link UP
Sep 9 21:17:08.672360 systemd-networkd[1449]: lxc_health: Gained carrier
Sep 9 21:17:09.703292 kubelet[2650]: E0909 21:17:09.703246 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:10.175740 systemd-networkd[1449]: lxc_health: Gained IPv6LL
Sep 9 21:17:10.271343 containerd[1524]: time="2025-09-09T21:17:10.271295561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\" id:\"5d2eb8b84b8449041cae5be6491118d6cb92455512ffe0965cac6d471542b56c\" pid:5241 exited_at:{seconds:1757452630 nanos:270653238}"
Sep 9 21:17:10.519158 kubelet[2650]: E0909 21:17:10.519122 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:11.520380 kubelet[2650]: E0909 21:17:11.520121 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 9 21:17:12.442330 containerd[1524]: time="2025-09-09T21:17:12.442112849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\" id:\"9cc1958c686f59353a67cdd7edd0af6379796591862bfc90f484439184efc59f\" pid:5276 exited_at:{seconds:1757452632 nanos:441670208}"
Sep 9 21:17:14.557913 containerd[1524]: time="2025-09-09T21:17:14.557868375Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\" id:\"bda899c3d77f2e7a95ae6d6d11aa91279fcfcf70f500d763c0a1c90f9d3cf324\" pid:5301 exited_at:{seconds:1757452634 nanos:557593494}"
Sep 9 21:17:16.659348 containerd[1524]: time="2025-09-09T21:17:16.659190526Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2fe8468ae338bfd8d5f95f5a2d59d4f74f7a8a737cfb071fb6d4fb6b3d88a2e\" id:\"8119f7b7dc4e6ea881e123c7171a6b2fe072ef5fc647ce7b46df186e6467a738\" pid:5325 exited_at:{seconds:1757452636 nanos:658593803}"
Sep 9 21:17:16.663744 sshd[4440]: Connection closed by 10.0.0.1 port 37286
Sep 9 21:17:16.664069 sshd-session[4437]: pam_unix(sshd:session): session closed for user core
Sep 9 21:17:16.667774 systemd-logind[1501]: Session 26 logged out. Waiting for processes to exit.
Sep 9 21:17:16.668045 systemd[1]: sshd@25-10.0.0.61:22-10.0.0.1:37286.service: Deactivated successfully.
Sep 9 21:17:16.669588 systemd[1]: session-26.scope: Deactivated successfully.
Sep 9 21:17:16.670796 systemd-logind[1501]: Removed session 26.