Sep 11 04:43:05.760984 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 11 04:43:05.761003 kernel: Linux version 6.12.46-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Sep 11 03:18:39 -00 2025
Sep 11 04:43:05.761012 kernel: KASLR enabled
Sep 11 04:43:05.761018 kernel: efi: EFI v2.7 by EDK II
Sep 11 04:43:05.761024 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 11 04:43:05.761029 kernel: random: crng init done
Sep 11 04:43:05.761036 kernel: secureboot: Secure boot disabled
Sep 11 04:43:05.761042 kernel: ACPI: Early table checksum verification disabled
Sep 11 04:43:05.761047 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 11 04:43:05.761054 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 11 04:43:05.761061 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761067 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761072 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761078 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761085 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761092 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761099 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761105 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761111 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 11 04:43:05.761117 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 11 04:43:05.761124 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 11 04:43:05.761130 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 04:43:05.761136 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 11 04:43:05.761142 kernel: Zone ranges:
Sep 11 04:43:05.761148 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 04:43:05.761155 kernel: DMA32 empty
Sep 11 04:43:05.761161 kernel: Normal empty
Sep 11 04:43:05.761167 kernel: Device empty
Sep 11 04:43:05.761173 kernel: Movable zone start for each node
Sep 11 04:43:05.761179 kernel: Early memory node ranges
Sep 11 04:43:05.761185 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 11 04:43:05.761191 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 11 04:43:05.761197 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 11 04:43:05.761203 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 11 04:43:05.761209 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 11 04:43:05.761240 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 11 04:43:05.761249 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 11 04:43:05.761257 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 11 04:43:05.761263 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 11 04:43:05.761270 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 11 04:43:05.761279 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 11 04:43:05.761285 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 11 04:43:05.761292 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 11 04:43:05.761299 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 11 04:43:05.761306 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 11 04:43:05.761313 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 11 04:43:05.761319 kernel: psci: probing for conduit method from ACPI.
Sep 11 04:43:05.761325 kernel: psci: PSCIv1.1 detected in firmware.
Sep 11 04:43:05.761332 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 11 04:43:05.761350 kernel: psci: Trusted OS migration not required
Sep 11 04:43:05.761357 kernel: psci: SMC Calling Convention v1.1
Sep 11 04:43:05.761363 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 11 04:43:05.761370 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 11 04:43:05.761378 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 11 04:43:05.761384 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 11 04:43:05.761391 kernel: Detected PIPT I-cache on CPU0
Sep 11 04:43:05.761397 kernel: CPU features: detected: GIC system register CPU interface
Sep 11 04:43:05.761403 kernel: CPU features: detected: Spectre-v4
Sep 11 04:43:05.761410 kernel: CPU features: detected: Spectre-BHB
Sep 11 04:43:05.761416 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 11 04:43:05.761422 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 11 04:43:05.761429 kernel: CPU features: detected: ARM erratum 1418040
Sep 11 04:43:05.761435 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 11 04:43:05.761441 kernel: alternatives: applying boot alternatives
Sep 11 04:43:05.761449 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ef595a17d54d2c763572f89e076b038d4e5b64e896cb23d2c32cc64c178d3d5c
Sep 11 04:43:05.761457 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 11 04:43:05.761464 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 11 04:43:05.761471 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 11 04:43:05.761477 kernel: Fallback order for Node 0: 0
Sep 11 04:43:05.761484 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 11 04:43:05.761490 kernel: Policy zone: DMA
Sep 11 04:43:05.761497 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 11 04:43:05.761503 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 11 04:43:05.761509 kernel: software IO TLB: area num 4.
Sep 11 04:43:05.761516 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 11 04:43:05.761522 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 11 04:43:05.761530 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 11 04:43:05.761536 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 11 04:43:05.761543 kernel: rcu: RCU event tracing is enabled.
Sep 11 04:43:05.761550 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 11 04:43:05.761556 kernel: Trampoline variant of Tasks RCU enabled.
Sep 11 04:43:05.761562 kernel: Tracing variant of Tasks RCU enabled.
Sep 11 04:43:05.761569 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 11 04:43:05.761575 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 11 04:43:05.761582 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 04:43:05.761588 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 11 04:43:05.761595 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 11 04:43:05.761602 kernel: GICv3: 256 SPIs implemented
Sep 11 04:43:05.761608 kernel: GICv3: 0 Extended SPIs implemented
Sep 11 04:43:05.761615 kernel: Root IRQ handler: gic_handle_irq
Sep 11 04:43:05.761621 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 11 04:43:05.761627 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 11 04:43:05.761633 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 11 04:43:05.761640 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 11 04:43:05.761646 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 11 04:43:05.761652 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 11 04:43:05.761659 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 11 04:43:05.761665 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 11 04:43:05.761671 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 11 04:43:05.761679 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 04:43:05.761685 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 11 04:43:05.761692 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 11 04:43:05.761698 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 11 04:43:05.761704 kernel: arm-pv: using stolen time PV
Sep 11 04:43:05.761711 kernel: Console: colour dummy device 80x25
Sep 11 04:43:05.761717 kernel: ACPI: Core revision 20240827
Sep 11 04:43:05.761724 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 11 04:43:05.761731 kernel: pid_max: default: 32768 minimum: 301
Sep 11 04:43:05.761737 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 11 04:43:05.761745 kernel: landlock: Up and running.
Sep 11 04:43:05.761752 kernel: SELinux: Initializing.
Sep 11 04:43:05.761758 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 04:43:05.761765 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 11 04:43:05.761771 kernel: rcu: Hierarchical SRCU implementation.
Sep 11 04:43:05.761778 kernel: rcu: Max phase no-delay instances is 400.
Sep 11 04:43:05.761784 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 11 04:43:05.761791 kernel: Remapping and enabling EFI services.
Sep 11 04:43:05.761797 kernel: smp: Bringing up secondary CPUs ...
Sep 11 04:43:05.761809 kernel: Detected PIPT I-cache on CPU1
Sep 11 04:43:05.761815 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 11 04:43:05.761822 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 11 04:43:05.761830 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 04:43:05.761837 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 11 04:43:05.761844 kernel: Detected PIPT I-cache on CPU2
Sep 11 04:43:05.761851 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 11 04:43:05.761858 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 11 04:43:05.761866 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 04:43:05.761872 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 11 04:43:05.761879 kernel: Detected PIPT I-cache on CPU3
Sep 11 04:43:05.761886 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 11 04:43:05.761893 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 11 04:43:05.761900 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 11 04:43:05.761906 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 11 04:43:05.761913 kernel: smp: Brought up 1 node, 4 CPUs
Sep 11 04:43:05.761920 kernel: SMP: Total of 4 processors activated.
Sep 11 04:43:05.761928 kernel: CPU: All CPU(s) started at EL1
Sep 11 04:43:05.761934 kernel: CPU features: detected: 32-bit EL0 Support
Sep 11 04:43:05.761941 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 11 04:43:05.761948 kernel: CPU features: detected: Common not Private translations
Sep 11 04:43:05.761955 kernel: CPU features: detected: CRC32 instructions
Sep 11 04:43:05.761961 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 11 04:43:05.761968 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 11 04:43:05.761975 kernel: CPU features: detected: LSE atomic instructions
Sep 11 04:43:05.761982 kernel: CPU features: detected: Privileged Access Never
Sep 11 04:43:05.761989 kernel: CPU features: detected: RAS Extension Support
Sep 11 04:43:05.761997 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 11 04:43:05.762004 kernel: alternatives: applying system-wide alternatives
Sep 11 04:43:05.762011 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 11 04:43:05.762018 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9064K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Sep 11 04:43:05.762025 kernel: devtmpfs: initialized
Sep 11 04:43:05.762032 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 11 04:43:05.762039 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 11 04:43:05.762046 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 11 04:43:05.762054 kernel: 0 pages in range for non-PLT usage
Sep 11 04:43:05.762061 kernel: 508576 pages in range for PLT usage
Sep 11 04:43:05.762067 kernel: pinctrl core: initialized pinctrl subsystem
Sep 11 04:43:05.762080 kernel: SMBIOS 3.0.0 present.
Sep 11 04:43:05.762087 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 11 04:43:05.762095 kernel: DMI: Memory slots populated: 1/1
Sep 11 04:43:05.762102 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 11 04:43:05.762109 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 11 04:43:05.762116 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 11 04:43:05.762123 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 11 04:43:05.762131 kernel: audit: initializing netlink subsys (disabled)
Sep 11 04:43:05.762139 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 11 04:43:05.762146 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 11 04:43:05.762153 kernel: cpuidle: using governor menu
Sep 11 04:43:05.762160 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 11 04:43:05.762167 kernel: ASID allocator initialised with 32768 entries
Sep 11 04:43:05.762174 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 11 04:43:05.762181 kernel: Serial: AMBA PL011 UART driver
Sep 11 04:43:05.762188 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 11 04:43:05.762196 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 11 04:43:05.762204 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 11 04:43:05.762210 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 11 04:43:05.762282 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 11 04:43:05.762291 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 11 04:43:05.762298 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 11 04:43:05.762305 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 11 04:43:05.762312 kernel: ACPI: Added _OSI(Module Device)
Sep 11 04:43:05.762319 kernel: ACPI: Added _OSI(Processor Device)
Sep 11 04:43:05.762329 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 11 04:43:05.762335 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 11 04:43:05.762342 kernel: ACPI: Interpreter enabled
Sep 11 04:43:05.762349 kernel: ACPI: Using GIC for interrupt routing
Sep 11 04:43:05.762356 kernel: ACPI: MCFG table detected, 1 entries
Sep 11 04:43:05.762363 kernel: ACPI: CPU0 has been hot-added
Sep 11 04:43:05.762370 kernel: ACPI: CPU1 has been hot-added
Sep 11 04:43:05.762377 kernel: ACPI: CPU2 has been hot-added
Sep 11 04:43:05.762384 kernel: ACPI: CPU3 has been hot-added
Sep 11 04:43:05.762392 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 11 04:43:05.762398 kernel: printk: legacy console [ttyAMA0] enabled
Sep 11 04:43:05.762405 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 11 04:43:05.762530 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 11 04:43:05.762597 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 11 04:43:05.762672 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 11 04:43:05.762731 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 11 04:43:05.762792 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 11 04:43:05.762802 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 11 04:43:05.762809 kernel: PCI host bridge to bus 0000:00
Sep 11 04:43:05.762873 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 11 04:43:05.762927 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 11 04:43:05.762980 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 11 04:43:05.763032 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 11 04:43:05.763114 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 11 04:43:05.763182 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 11 04:43:05.763271 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 11 04:43:05.763336 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 11 04:43:05.763396 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 11 04:43:05.763455 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 11 04:43:05.763520 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 11 04:43:05.763595 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 11 04:43:05.763657 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 11 04:43:05.763717 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 11 04:43:05.763772 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 11 04:43:05.763781 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 11 04:43:05.763788 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 11 04:43:05.763795 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 11 04:43:05.763803 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 11 04:43:05.763810 kernel: iommu: Default domain type: Translated
Sep 11 04:43:05.763817 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 11 04:43:05.763824 kernel: efivars: Registered efivars operations
Sep 11 04:43:05.763831 kernel: vgaarb: loaded
Sep 11 04:43:05.763838 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 11 04:43:05.763851 kernel: VFS: Disk quotas dquot_6.6.0
Sep 11 04:43:05.763857 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 11 04:43:05.763864 kernel: pnp: PnP ACPI init
Sep 11 04:43:05.763940 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 11 04:43:05.763950 kernel: pnp: PnP ACPI: found 1 devices
Sep 11 04:43:05.763957 kernel: NET: Registered PF_INET protocol family
Sep 11 04:43:05.763964 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 11 04:43:05.763971 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 11 04:43:05.763978 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 11 04:43:05.763985 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 11 04:43:05.763992 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 11 04:43:05.763999 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 11 04:43:05.764007 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 04:43:05.764014 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 11 04:43:05.764021 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 11 04:43:05.764028 kernel: PCI: CLS 0 bytes, default 64
Sep 11 04:43:05.764035 kernel: kvm [1]: HYP mode not available
Sep 11 04:43:05.764041 kernel: Initialise system trusted keyrings
Sep 11 04:43:05.764048 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 11 04:43:05.764055 kernel: Key type asymmetric registered
Sep 11 04:43:05.764062 kernel: Asymmetric key parser 'x509' registered
Sep 11 04:43:05.764070 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 11 04:43:05.764077 kernel: io scheduler mq-deadline registered
Sep 11 04:43:05.764084 kernel: io scheduler kyber registered
Sep 11 04:43:05.764091 kernel: io scheduler bfq registered
Sep 11 04:43:05.764098 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 11 04:43:05.764105 kernel: ACPI: button: Power Button [PWRB]
Sep 11 04:43:05.764112 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 11 04:43:05.764171 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 11 04:43:05.764180 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 11 04:43:05.764189 kernel: thunder_xcv, ver 1.0
Sep 11 04:43:05.764196 kernel: thunder_bgx, ver 1.0
Sep 11 04:43:05.764203 kernel: nicpf, ver 1.0
Sep 11 04:43:05.764209 kernel: nicvf, ver 1.0
Sep 11 04:43:05.764344 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 11 04:43:05.764403 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-11T04:43:05 UTC (1757565785)
Sep 11 04:43:05.764413 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 11 04:43:05.764421 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 11 04:43:05.764431 kernel: watchdog: NMI not fully supported
Sep 11 04:43:05.764438 kernel: watchdog: Hard watchdog permanently disabled
Sep 11 04:43:05.764445 kernel: NET: Registered PF_INET6 protocol family
Sep 11 04:43:05.764452 kernel: Segment Routing with IPv6
Sep 11 04:43:05.764459 kernel: In-situ OAM (IOAM) with IPv6
Sep 11 04:43:05.764466 kernel: NET: Registered PF_PACKET protocol family
Sep 11 04:43:05.764473 kernel: Key type dns_resolver registered
Sep 11 04:43:05.764480 kernel: registered taskstats version 1
Sep 11 04:43:05.764487 kernel: Loading compiled-in X.509 certificates
Sep 11 04:43:05.764495 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.46-flatcar: b59d388cee5bf40f9111b7148ba3e51a29f91cd1'
Sep 11 04:43:05.764502 kernel: Demotion targets for Node 0: null
Sep 11 04:43:05.764509 kernel: Key type .fscrypt registered
Sep 11 04:43:05.764516 kernel: Key type fscrypt-provisioning registered
Sep 11 04:43:05.764523 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 11 04:43:05.764530 kernel: ima: Allocated hash algorithm: sha1
Sep 11 04:43:05.764537 kernel: ima: No architecture policies found
Sep 11 04:43:05.764544 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 11 04:43:05.764550 kernel: clk: Disabling unused clocks
Sep 11 04:43:05.764558 kernel: PM: genpd: Disabling unused power domains
Sep 11 04:43:05.764565 kernel: Warning: unable to open an initial console.
Sep 11 04:43:05.764572 kernel: Freeing unused kernel memory: 38912K
Sep 11 04:43:05.764579 kernel: Run /init as init process
Sep 11 04:43:05.764586 kernel: with arguments:
Sep 11 04:43:05.764593 kernel: /init
Sep 11 04:43:05.764599 kernel: with environment:
Sep 11 04:43:05.764606 kernel: HOME=/
Sep 11 04:43:05.764613 kernel: TERM=linux
Sep 11 04:43:05.764620 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 11 04:43:05.764628 systemd[1]: Successfully made /usr/ read-only.
Sep 11 04:43:05.764638 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 04:43:05.764646 systemd[1]: Detected virtualization kvm.
Sep 11 04:43:05.764654 systemd[1]: Detected architecture arm64.
Sep 11 04:43:05.764661 systemd[1]: Running in initrd.
Sep 11 04:43:05.764668 systemd[1]: No hostname configured, using default hostname.
Sep 11 04:43:05.764677 systemd[1]: Hostname set to .
Sep 11 04:43:05.764684 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 04:43:05.764692 systemd[1]: Queued start job for default target initrd.target.
Sep 11 04:43:05.764699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 04:43:05.764707 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 04:43:05.764715 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 11 04:43:05.764722 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 04:43:05.764730 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 11 04:43:05.764739 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 11 04:43:05.764748 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 11 04:43:05.764755 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 11 04:43:05.764763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 04:43:05.764770 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 04:43:05.764778 systemd[1]: Reached target paths.target - Path Units.
Sep 11 04:43:05.764785 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 04:43:05.764794 systemd[1]: Reached target swap.target - Swaps.
Sep 11 04:43:05.764802 systemd[1]: Reached target timers.target - Timer Units.
Sep 11 04:43:05.764809 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 04:43:05.764822 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 04:43:05.764834 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 11 04:43:05.764843 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 11 04:43:05.764853 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 04:43:05.764861 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 04:43:05.764869 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 04:43:05.764878 systemd[1]: Reached target sockets.target - Socket Units.
Sep 11 04:43:05.764885 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 11 04:43:05.764893 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 04:43:05.764900 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 11 04:43:05.764908 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 11 04:43:05.764915 systemd[1]: Starting systemd-fsck-usr.service...
Sep 11 04:43:05.764923 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 04:43:05.764930 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 04:43:05.764939 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 04:43:05.764946 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 11 04:43:05.764954 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 04:43:05.764962 systemd[1]: Finished systemd-fsck-usr.service.
Sep 11 04:43:05.764985 systemd-journald[244]: Collecting audit messages is disabled.
Sep 11 04:43:05.765004 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 11 04:43:05.765012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 04:43:05.765020 systemd-journald[244]: Journal started
Sep 11 04:43:05.765039 systemd-journald[244]: Runtime Journal (/run/log/journal/602eaed7ffb548059d1262e7ac39b8c9) is 6M, max 48.5M, 42.4M free.
Sep 11 04:43:05.763874 systemd-modules-load[245]: Inserted module 'overlay'
Sep 11 04:43:05.768582 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 04:43:05.770693 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 11 04:43:05.772383 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 04:43:05.778273 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 11 04:43:05.782245 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 11 04:43:05.784017 systemd-modules-load[245]: Inserted module 'br_netfilter'
Sep 11 04:43:05.784947 kernel: Bridge firewalling registered
Sep 11 04:43:05.788366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 04:43:05.789612 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 04:43:05.792462 systemd-tmpfiles[267]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 11 04:43:05.793407 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 04:43:05.795686 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 04:43:05.797492 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 04:43:05.813566 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 04:43:05.815295 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 11 04:43:05.820482 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 04:43:05.822520 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 04:43:05.830147 dracut-cmdline[289]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ef595a17d54d2c763572f89e076b038d4e5b64e896cb23d2c32cc64c178d3d5c
Sep 11 04:43:05.856898 systemd-resolved[295]: Positive Trust Anchors:
Sep 11 04:43:05.856919 systemd-resolved[295]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 04:43:05.856950 systemd-resolved[295]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 04:43:05.861717 systemd-resolved[295]: Defaulting to hostname 'linux'.
Sep 11 04:43:05.862700 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 04:43:05.864335 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 04:43:05.899252 kernel: SCSI subsystem initialized
Sep 11 04:43:05.904255 kernel: Loading iSCSI transport class v2.0-870.
Sep 11 04:43:05.911246 kernel: iscsi: registered transport (tcp)
Sep 11 04:43:05.924255 kernel: iscsi: registered transport (qla4xxx)
Sep 11 04:43:05.924292 kernel: QLogic iSCSI HBA Driver
Sep 11 04:43:05.939353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 04:43:05.958291 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 04:43:05.961178 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 04:43:06.003254 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 11 04:43:06.005182 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 11 04:43:06.062242 kernel: raid6: neonx8 gen() 15736 MB/s
Sep 11 04:43:06.079241 kernel: raid6: neonx4 gen() 15782 MB/s
Sep 11 04:43:06.096238 kernel: raid6: neonx2 gen() 13179 MB/s
Sep 11 04:43:06.113244 kernel: raid6: neonx1 gen() 10447 MB/s
Sep 11 04:43:06.130233 kernel: raid6: int64x8 gen() 6874 MB/s
Sep 11 04:43:06.147234 kernel: raid6: int64x4 gen() 7330 MB/s
Sep 11 04:43:06.164246 kernel: raid6: int64x2 gen() 6099 MB/s
Sep 11 04:43:06.181248 kernel: raid6: int64x1 gen() 5055 MB/s
Sep 11 04:43:06.181279 kernel: raid6: using algorithm neonx4 gen() 15782 MB/s
Sep 11 04:43:06.198258 kernel: raid6: .... xor() 12298 MB/s, rmw enabled
Sep 11 04:43:06.198286 kernel: raid6: using neon recovery algorithm
Sep 11 04:43:06.203238 kernel: xor: measuring software checksum speed
Sep 11 04:43:06.203254 kernel: 8regs : 21658 MB/sec
Sep 11 04:43:06.204673 kernel: 32regs : 19344 MB/sec
Sep 11 04:43:06.204685 kernel: arm64_neon : 28099 MB/sec
Sep 11 04:43:06.204693 kernel: xor: using function: arm64_neon (28099 MB/sec)
Sep 11 04:43:06.256244 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 11 04:43:06.261970 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 04:43:06.264152 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 04:43:06.286809 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 11 04:43:06.290829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 04:43:06.292553 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 11 04:43:06.319927 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 11 04:43:06.340074 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 04:43:06.342012 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 04:43:06.392791 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 04:43:06.396350 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 11 04:43:06.439993 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 11 04:43:06.441373 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 11 04:43:06.443362 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 11 04:43:06.443395 kernel: GPT:9289727 != 19775487
Sep 11 04:43:06.444578 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 11 04:43:06.445319 kernel: GPT:9289727 != 19775487
Sep 11 04:43:06.448239 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 11 04:43:06.448275 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 04:43:06.451910 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 04:43:06.452022 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 04:43:06.454906 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 04:43:06.457446 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 11 04:43:06.479595 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 11 04:43:06.480742 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 04:43:06.486705 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 11 04:43:06.494990 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 11 04:43:06.502588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 04:43:06.508413 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 11 04:43:06.509298 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 11 04:43:06.515369 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 04:43:06.516246 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 04:43:06.517967 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 04:43:06.520080 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 11 04:43:06.521638 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 11 04:43:06.536489 disk-uuid[591]: Primary Header is updated.
Sep 11 04:43:06.536489 disk-uuid[591]: Secondary Entries is updated.
Sep 11 04:43:06.536489 disk-uuid[591]: Secondary Header is updated.
Sep 11 04:43:06.540250 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 04:43:06.541847 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 04:43:07.549269 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 11 04:43:07.549924 disk-uuid[595]: The operation has completed successfully.
Sep 11 04:43:07.577515 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 11 04:43:07.578432 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 11 04:43:07.597578 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 11 04:43:07.625087 sh[611]: Success
Sep 11 04:43:07.637620 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 11 04:43:07.637672 kernel: device-mapper: uevent: version 1.0.3
Sep 11 04:43:07.637686 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 11 04:43:07.644232 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 11 04:43:07.667842 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 11 04:43:07.670242 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 11 04:43:07.685333 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 11 04:43:07.689233 kernel: BTRFS: device fsid cc572581-74a6-458b-8466-929342450ac1 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (623)
Sep 11 04:43:07.690984 kernel: BTRFS info (device dm-0): first mount of filesystem cc572581-74a6-458b-8466-929342450ac1
Sep 11 04:43:07.691014 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 11 04:43:07.695232 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 11 04:43:07.695254 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 11 04:43:07.695786 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 11 04:43:07.696818 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 04:43:07.697930 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 11 04:43:07.698641 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 11 04:43:07.699935 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 11 04:43:07.723248 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655)
Sep 11 04:43:07.725252 kernel: BTRFS info (device vda6): first mount of filesystem e4784285-49b9-425c-84d2-205a3fc9fef8
Sep 11 04:43:07.725284 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 04:43:07.726804 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 04:43:07.726835 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 04:43:07.731241 kernel: BTRFS info (device vda6): last unmount of filesystem e4784285-49b9-425c-84d2-205a3fc9fef8
Sep 11 04:43:07.731955 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 11 04:43:07.734101 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 11 04:43:07.795257 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 04:43:07.799268 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 04:43:07.836633 ignition[697]: Ignition 2.22.0
Sep 11 04:43:07.836647 ignition[697]: Stage: fetch-offline
Sep 11 04:43:07.836674 ignition[697]: no configs at "/usr/lib/ignition/base.d"
Sep 11 04:43:07.838133 systemd-networkd[802]: lo: Link UP
Sep 11 04:43:07.836681 ignition[697]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 04:43:07.838137 systemd-networkd[802]: lo: Gained carrier
Sep 11 04:43:07.836754 ignition[697]: parsed url from cmdline: ""
Sep 11 04:43:07.838882 systemd-networkd[802]: Enumeration completed
Sep 11 04:43:07.836757 ignition[697]: no config URL provided
Sep 11 04:43:07.839169 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 04:43:07.836762 ignition[697]: reading system config file "/usr/lib/ignition/user.ign"
Sep 11 04:43:07.839653 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 04:43:07.836768 ignition[697]: no config at "/usr/lib/ignition/user.ign"
Sep 11 04:43:07.839656 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 04:43:07.836784 ignition[697]: op(1): [started] loading QEMU firmware config module
Sep 11 04:43:07.840611 systemd[1]: Reached target network.target - Network.
Sep 11 04:43:07.836788 ignition[697]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 11 04:43:07.840815 systemd-networkd[802]: eth0: Link UP
Sep 11 04:43:07.842123 ignition[697]: op(1): [finished] loading QEMU firmware config module
Sep 11 04:43:07.840914 systemd-networkd[802]: eth0: Gained carrier
Sep 11 04:43:07.840924 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 04:43:07.865268 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 04:43:07.890624 ignition[697]: parsing config with SHA512: 2fbb5afab81bb85e8745e3ffe033b88f3fc27dc53b265aca62357f3640e3d3fc623e857aa6e72650fb0336db76721876e46b196c215d7368523192e3f94afab8
Sep 11 04:43:07.895970 unknown[697]: fetched base config from "system"
Sep 11 04:43:07.895982 unknown[697]: fetched user config from "qemu"
Sep 11 04:43:07.896374 ignition[697]: fetch-offline: fetch-offline passed
Sep 11 04:43:07.896425 ignition[697]: Ignition finished successfully
Sep 11 04:43:07.898198 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 04:43:07.899371 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 11 04:43:07.900098 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 11 04:43:07.937203 ignition[812]: Ignition 2.22.0
Sep 11 04:43:07.937255 ignition[812]: Stage: kargs
Sep 11 04:43:07.937382 ignition[812]: no configs at "/usr/lib/ignition/base.d"
Sep 11 04:43:07.937391 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 04:43:07.938072 ignition[812]: kargs: kargs passed
Sep 11 04:43:07.938110 ignition[812]: Ignition finished successfully
Sep 11 04:43:07.942837 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 11 04:43:07.944615 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 11 04:43:07.976146 ignition[820]: Ignition 2.22.0
Sep 11 04:43:07.976166 ignition[820]: Stage: disks
Sep 11 04:43:07.976329 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Sep 11 04:43:07.976338 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 04:43:07.977074 ignition[820]: disks: disks passed
Sep 11 04:43:07.978523 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 11 04:43:07.977112 ignition[820]: Ignition finished successfully
Sep 11 04:43:07.979482 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 11 04:43:07.980591 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 11 04:43:07.982070 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 04:43:07.983244 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 04:43:07.984728 systemd[1]: Reached target basic.target - Basic System.
Sep 11 04:43:07.986891 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 11 04:43:08.016952 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 11 04:43:08.020639 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 11 04:43:08.022472 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 11 04:43:08.082228 kernel: EXT4-fs (vda9): mounted filesystem b90b6138-4768-44e9-aa3f-a0cdc2f28327 r/w with ordered data mode. Quota mode: none.
Sep 11 04:43:08.082971 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 11 04:43:08.084023 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 11 04:43:08.086777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 04:43:08.088663 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 11 04:43:08.089467 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 11 04:43:08.089504 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 11 04:43:08.089527 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 04:43:08.097644 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 11 04:43:08.099405 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 11 04:43:08.102266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (840)
Sep 11 04:43:08.104240 kernel: BTRFS info (device vda6): first mount of filesystem e4784285-49b9-425c-84d2-205a3fc9fef8
Sep 11 04:43:08.104266 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 04:43:08.106389 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 04:43:08.106416 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 04:43:08.107324 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 04:43:08.133336 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
Sep 11 04:43:08.137292 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
Sep 11 04:43:08.140771 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
Sep 11 04:43:08.144427 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 11 04:43:08.207968 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 11 04:43:08.209744 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 11 04:43:08.211894 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 11 04:43:08.228232 kernel: BTRFS info (device vda6): last unmount of filesystem e4784285-49b9-425c-84d2-205a3fc9fef8
Sep 11 04:43:08.236765 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 11 04:43:08.262455 ignition[954]: INFO : Ignition 2.22.0
Sep 11 04:43:08.262455 ignition[954]: INFO : Stage: mount
Sep 11 04:43:08.264062 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 04:43:08.264062 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 04:43:08.264062 ignition[954]: INFO : mount: mount passed
Sep 11 04:43:08.264062 ignition[954]: INFO : Ignition finished successfully
Sep 11 04:43:08.265242 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 11 04:43:08.268011 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 11 04:43:08.816306 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 11 04:43:08.817725 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 11 04:43:08.846266 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (966)
Sep 11 04:43:08.846320 kernel: BTRFS info (device vda6): first mount of filesystem e4784285-49b9-425c-84d2-205a3fc9fef8
Sep 11 04:43:08.847888 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 11 04:43:08.850262 kernel: BTRFS info (device vda6): turning on async discard
Sep 11 04:43:08.850279 kernel: BTRFS info (device vda6): enabling free space tree
Sep 11 04:43:08.851721 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 11 04:43:08.882648 ignition[983]: INFO : Ignition 2.22.0
Sep 11 04:43:08.882648 ignition[983]: INFO : Stage: files
Sep 11 04:43:08.883937 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 04:43:08.883937 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 04:43:08.883937 ignition[983]: DEBUG : files: compiled without relabeling support, skipping
Sep 11 04:43:08.886666 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 11 04:43:08.886666 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 11 04:43:08.886666 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 11 04:43:08.886666 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 11 04:43:08.886666 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 11 04:43:08.886653 unknown[983]: wrote ssh authorized keys file for user: core
Sep 11 04:43:08.893000 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 11 04:43:08.893000 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 11 04:43:08.937236 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 11 04:43:09.388066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 11 04:43:09.389785 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 11 04:43:09.389785 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 11 04:43:09.405338 systemd-networkd[802]: eth0: Gained IPv6LL
Sep 11 04:43:09.578776 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 11 04:43:09.742328 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 11 04:43:09.742328 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 04:43:09.746225 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 04:43:09.762589 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 11 04:43:10.069133 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 11 04:43:10.521607 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 11 04:43:10.521607 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 11 04:43:10.524654 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 11 04:43:10.538206 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 04:43:10.541944 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 11 04:43:10.543119 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 11 04:43:10.543119 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 11 04:43:10.543119 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 11 04:43:10.543119 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 04:43:10.543119 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 11 04:43:10.543119 ignition[983]: INFO : files: files passed
Sep 11 04:43:10.543119 ignition[983]: INFO : Ignition finished successfully
Sep 11 04:43:10.544741 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 11 04:43:10.548363 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 11 04:43:10.551351 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 11 04:43:10.571029 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 11 04:43:10.571904 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 11 04:43:10.573961 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 11 04:43:10.575202 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 04:43:10.575202 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 04:43:10.577710 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 11 04:43:10.578073 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 04:43:10.580517 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 11 04:43:10.583350 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 11 04:43:10.612073 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 11 04:43:10.612172 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 11 04:43:10.613923 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 11 04:43:10.615253 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 11 04:43:10.616732 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 11 04:43:10.617408 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 11 04:43:10.648000 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 04:43:10.650055 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 11 04:43:10.670372 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 11 04:43:10.671288 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 04:43:10.672914 systemd[1]: Stopped target timers.target - Timer Units.
Sep 11 04:43:10.674443 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 11 04:43:10.674549 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 11 04:43:10.676442 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 11 04:43:10.678028 systemd[1]: Stopped target basic.target - Basic System.
Sep 11 04:43:10.679280 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 11 04:43:10.680688 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 11 04:43:10.682076 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 11 04:43:10.683589 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 11 04:43:10.685147 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 11 04:43:10.686589 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 11 04:43:10.688124 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 11 04:43:10.689941 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 11 04:43:10.691235 systemd[1]: Stopped target swap.target - Swaps.
Sep 11 04:43:10.692415 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 11 04:43:10.692518 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 11 04:43:10.694285 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 11 04:43:10.695914 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 04:43:10.697355 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 11 04:43:10.698290 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 04:43:10.699794 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 11 04:43:10.699897 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 11 04:43:10.701993 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 11 04:43:10.702150 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 11 04:43:10.703585 systemd[1]: Stopped target paths.target - Path Units.
Sep 11 04:43:10.704766 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 11 04:43:10.708308 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 04:43:10.709355 systemd[1]: Stopped target slices.target - Slice Units.
Sep 11 04:43:10.710973 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 11 04:43:10.712128 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 11 04:43:10.712214 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 11 04:43:10.713378 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 11 04:43:10.713449 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 11 04:43:10.714593 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 11 04:43:10.714701 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 11 04:43:10.716082 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 11 04:43:10.716177 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 11 04:43:10.717943 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 11 04:43:10.719333 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 11 04:43:10.719449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 04:43:10.728576 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 11 04:43:10.729207 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 11 04:43:10.729333 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 04:43:10.730867 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 11 04:43:10.730956 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 11 04:43:10.736343 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 11 04:43:10.736427 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 11 04:43:10.742357 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 11 04:43:10.743551 ignition[1039]: INFO : Ignition 2.22.0
Sep 11 04:43:10.743551 ignition[1039]: INFO : Stage: umount
Sep 11 04:43:10.746275 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 11 04:43:10.746275 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 11 04:43:10.746275 ignition[1039]: INFO : umount: umount passed
Sep 11 04:43:10.746275 ignition[1039]: INFO : Ignition finished successfully
Sep 11 04:43:10.747312 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 11 04:43:10.747404 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 11 04:43:10.748816 systemd[1]: Stopped target network.target - Network.
Sep 11 04:43:10.751303 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 11 04:43:10.751363 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 11 04:43:10.752571 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 11 04:43:10.752611 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 11 04:43:10.753983 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 11 04:43:10.754028 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 11 04:43:10.755333 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 11 04:43:10.755372 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 11 04:43:10.756671 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 11 04:43:10.758022 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 11 04:43:10.764193 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 11 04:43:10.765308 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 11 04:43:10.768247 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 11 04:43:10.768688 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 11 04:43:10.768726 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 04:43:10.772529 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 11 04:43:10.772735 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 11 04:43:10.772817 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 11 04:43:10.775716 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 11 04:43:10.776086 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 11 04:43:10.777653 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 11 04:43:10.777688 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 04:43:10.779774 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 11 04:43:10.781476 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 11 04:43:10.781534 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 11 04:43:10.783069 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 11 04:43:10.783109 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 11 04:43:10.785397 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 11 04:43:10.785441 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 11 04:43:10.787054 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 04:43:10.790682 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 11 04:43:10.802034 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 11 04:43:10.802154 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 11 04:43:10.804665 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 11 04:43:10.804808 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 04:43:10.806478 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 11 04:43:10.806558 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 11 04:43:10.808084 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 11 04:43:10.808122 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 11 04:43:10.809292 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 11 04:43:10.809318 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 04:43:10.810596 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 11 04:43:10.810638 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 11 04:43:10.812886 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 11 04:43:10.812957 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 11 04:43:10.814928 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 11 04:43:10.814968 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 11 04:43:10.817259 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 11 04:43:10.817307 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 11 04:43:10.819338 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 11 04:43:10.820093 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 11 04:43:10.820144 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 04:43:10.822631 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 11 04:43:10.822670 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 04:43:10.824956 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 11 04:43:10.824995 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 11 04:43:10.840740 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 11 04:43:10.840830 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 11 04:43:10.842510 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 11 04:43:10.844506 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 11 04:43:10.852543 systemd[1]: Switching root.
Sep 11 04:43:10.888184 systemd-journald[244]: Journal stopped
Sep 11 04:43:11.614763 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
Sep 11 04:43:11.614815 kernel: SELinux: policy capability network_peer_controls=1
Sep 11 04:43:11.614833 kernel: SELinux: policy capability open_perms=1
Sep 11 04:43:11.614842 kernel: SELinux: policy capability extended_socket_class=1
Sep 11 04:43:11.614851 kernel: SELinux: policy capability always_check_network=0
Sep 11 04:43:11.614860 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 11 04:43:11.614875 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 11 04:43:11.614887 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 11 04:43:11.614896 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 11 04:43:11.614905 kernel: SELinux: policy capability userspace_initial_context=0
Sep 11 04:43:11.614914 kernel: audit: type=1403 audit(1757565791.080:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 11 04:43:11.614928 systemd[1]: Successfully loaded SELinux policy in 57.707ms.
Sep 11 04:43:11.614944 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.124ms.
Sep 11 04:43:11.614956 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 11 04:43:11.614967 systemd[1]: Detected virtualization kvm.
Sep 11 04:43:11.614978 systemd[1]: Detected architecture arm64.
Sep 11 04:43:11.614988 systemd[1]: Detected first boot.
Sep 11 04:43:11.614998 systemd[1]: Initializing machine ID from VM UUID.
Sep 11 04:43:11.615009 zram_generator::config[1084]: No configuration found.
Sep 11 04:43:11.615024 kernel: NET: Registered PF_VSOCK protocol family
Sep 11 04:43:11.615035 systemd[1]: Populated /etc with preset unit settings.
Sep 11 04:43:11.615046 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 11 04:43:11.615057 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 11 04:43:11.615067 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 11 04:43:11.615078 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 11 04:43:11.615089 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 11 04:43:11.615099 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 11 04:43:11.615110 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 11 04:43:11.615122 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 11 04:43:11.615132 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 11 04:43:11.615143 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 11 04:43:11.615154 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 11 04:43:11.615165 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 11 04:43:11.615175 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 11 04:43:11.615186 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 11 04:43:11.615204 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 11 04:43:11.615230 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 11 04:43:11.615244 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 11 04:43:11.615255 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 11 04:43:11.615265 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 11 04:43:11.615276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 11 04:43:11.615286 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 11 04:43:11.615296 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 11 04:43:11.615307 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 11 04:43:11.615318 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 11 04:43:11.615329 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 11 04:43:11.615340 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 11 04:43:11.615350 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 11 04:43:11.615360 systemd[1]: Reached target slices.target - Slice Units.
Sep 11 04:43:11.615371 systemd[1]: Reached target swap.target - Swaps.
Sep 11 04:43:11.615382 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 11 04:43:11.615393 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 11 04:43:11.615407 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 11 04:43:11.615417 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 11 04:43:11.615457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 11 04:43:11.615500 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 11 04:43:11.615510 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 11 04:43:11.615521 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 11 04:43:11.615531 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 11 04:43:11.615543 systemd[1]: Mounting media.mount - External Media Directory...
Sep 11 04:43:11.615553 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 11 04:43:11.615564 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 11 04:43:11.615574 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 11 04:43:11.615587 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 11 04:43:11.615598 systemd[1]: Reached target machines.target - Containers.
Sep 11 04:43:11.615609 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 11 04:43:11.615620 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 04:43:11.615631 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 11 04:43:11.615641 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 11 04:43:11.615652 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 04:43:11.615663 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 04:43:11.615674 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 04:43:11.615685 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 11 04:43:11.615695 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 04:43:11.615706 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 11 04:43:11.615717 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 11 04:43:11.615727 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 11 04:43:11.615738 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 11 04:43:11.615748 kernel: fuse: init (API version 7.41)
Sep 11 04:43:11.615758 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 11 04:43:11.615770 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 04:43:11.615781 kernel: loop: module loaded
Sep 11 04:43:11.615790 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 11 04:43:11.615801 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 11 04:43:11.615812 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 11 04:43:11.615822 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 11 04:43:11.615833 kernel: ACPI: bus type drm_connector registered
Sep 11 04:43:11.615842 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 11 04:43:11.615854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 11 04:43:11.615870 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 11 04:43:11.615881 systemd[1]: Stopped verity-setup.service.
Sep 11 04:43:11.615892 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 11 04:43:11.615902 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 11 04:43:11.615935 systemd-journald[1159]: Collecting audit messages is disabled.
Sep 11 04:43:11.615959 systemd[1]: Mounted media.mount - External Media Directory.
Sep 11 04:43:11.615970 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 11 04:43:11.615982 systemd-journald[1159]: Journal started
Sep 11 04:43:11.616002 systemd-journald[1159]: Runtime Journal (/run/log/journal/602eaed7ffb548059d1262e7ac39b8c9) is 6M, max 48.5M, 42.4M free.
Sep 11 04:43:11.419978 systemd[1]: Queued start job for default target multi-user.target.
Sep 11 04:43:11.440131 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 11 04:43:11.440518 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 11 04:43:11.618494 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 11 04:43:11.619060 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 11 04:43:11.620067 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 11 04:43:11.621114 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 11 04:43:11.622378 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 11 04:43:11.623594 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 11 04:43:11.623752 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 11 04:43:11.624924 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 04:43:11.625089 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 04:43:11.626314 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 04:43:11.626480 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 04:43:11.627486 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 04:43:11.627641 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 04:43:11.628871 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 11 04:43:11.629032 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 11 04:43:11.630155 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 04:43:11.630366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 04:43:11.631471 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 11 04:43:11.632757 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 11 04:43:11.634003 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 11 04:43:11.635376 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 11 04:43:11.646558 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 11 04:43:11.648836 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 11 04:43:11.650669 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 11 04:43:11.651571 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 11 04:43:11.651599 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 11 04:43:11.653156 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 11 04:43:11.659028 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 11 04:43:11.660346 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 04:43:11.661504 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 11 04:43:11.663345 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 11 04:43:11.664473 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 04:43:11.666352 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 11 04:43:11.667438 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 04:43:11.668290 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 11 04:43:11.670257 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 11 04:43:11.673051 systemd-journald[1159]: Time spent on flushing to /var/log/journal/602eaed7ffb548059d1262e7ac39b8c9 is 21.116ms for 888 entries.
Sep 11 04:43:11.673051 systemd-journald[1159]: System Journal (/var/log/journal/602eaed7ffb548059d1262e7ac39b8c9) is 8M, max 195.6M, 187.6M free.
Sep 11 04:43:11.700613 systemd-journald[1159]: Received client request to flush runtime journal.
Sep 11 04:43:11.700651 kernel: loop0: detected capacity change from 0 to 100632
Sep 11 04:43:11.674498 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 11 04:43:11.676978 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 11 04:43:11.678395 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 11 04:43:11.679596 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 11 04:43:11.691971 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 11 04:43:11.693606 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 11 04:43:11.698078 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 11 04:43:11.703443 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 11 04:43:11.703388 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 11 04:43:11.705229 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 11 04:43:11.717380 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 11 04:43:11.719620 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 11 04:43:11.727277 kernel: loop1: detected capacity change from 0 to 207008
Sep 11 04:43:11.732736 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 11 04:43:11.745968 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Sep 11 04:43:11.746284 systemd-tmpfiles[1216]: ACLs are not supported, ignoring.
Sep 11 04:43:11.750295 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 11 04:43:11.759248 kernel: loop2: detected capacity change from 0 to 119368
Sep 11 04:43:11.800266 kernel: loop3: detected capacity change from 0 to 100632
Sep 11 04:43:11.806244 kernel: loop4: detected capacity change from 0 to 207008
Sep 11 04:43:11.811240 kernel: loop5: detected capacity change from 0 to 119368
Sep 11 04:43:11.814955 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 11 04:43:11.815425 (sd-merge)[1222]: Merged extensions into '/usr'.
Sep 11 04:43:11.820741 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 11 04:43:11.820770 systemd[1]: Reloading...
Sep 11 04:43:11.866277 zram_generator::config[1245]: No configuration found.
Sep 11 04:43:11.936495 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 11 04:43:12.019113 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 11 04:43:12.019379 systemd[1]: Reloading finished in 198 ms.
Sep 11 04:43:12.033715 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 11 04:43:12.036247 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 11 04:43:12.048313 systemd[1]: Starting ensure-sysext.service...
Sep 11 04:43:12.049847 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 11 04:43:12.058643 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)...
Sep 11 04:43:12.058656 systemd[1]: Reloading...
Sep 11 04:43:12.063205 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 11 04:43:12.063581 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 11 04:43:12.063861 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 11 04:43:12.064175 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 11 04:43:12.064898 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 11 04:43:12.065212 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Sep 11 04:43:12.065338 systemd-tmpfiles[1284]: ACLs are not supported, ignoring.
Sep 11 04:43:12.068318 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 04:43:12.068411 systemd-tmpfiles[1284]: Skipping /boot
Sep 11 04:43:12.074354 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot.
Sep 11 04:43:12.074441 systemd-tmpfiles[1284]: Skipping /boot
Sep 11 04:43:12.105259 zram_generator::config[1311]: No configuration found.
Sep 11 04:43:12.238713 systemd[1]: Reloading finished in 179 ms.
Sep 11 04:43:12.264760 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 11 04:43:12.271440 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 11 04:43:12.281263 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 04:43:12.283279 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 11 04:43:12.285427 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 11 04:43:12.287948 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 11 04:43:12.293231 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 11 04:43:12.295776 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 11 04:43:12.301313 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 11 04:43:12.304475 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 04:43:12.305657 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 04:43:12.308754 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 04:43:12.312429 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 11 04:43:12.313384 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 04:43:12.313541 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 04:43:12.319576 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 11 04:43:12.321133 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 04:43:12.321314 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 04:43:12.322738 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 04:43:12.322914 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 04:43:12.328954 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 11 04:43:12.332471 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 11 04:43:12.333735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 11 04:43:12.334436 systemd-udevd[1352]: Using default interface naming scheme 'v255'.
Sep 11 04:43:12.336731 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 11 04:43:12.344432 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 11 04:43:12.345630 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 11 04:43:12.345743 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 11 04:43:12.346948 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 11 04:43:12.349362 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 11 04:43:12.350311 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 11 04:43:12.354159 systemd[1]: Finished ensure-sysext.service.
Sep 11 04:43:12.355700 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 11 04:43:12.356304 augenrules[1386]: No rules
Sep 11 04:43:12.356317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 11 04:43:12.358284 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 04:43:12.358753 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 04:43:12.360053 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 11 04:43:12.360291 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 11 04:43:12.361658 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 11 04:43:12.361801 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 11 04:43:12.363214 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 11 04:43:12.364497 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 11 04:43:12.365599 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 11 04:43:12.375443 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 11 04:43:12.376461 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 11 04:43:12.376534 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 11 04:43:12.378309 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 11 04:43:12.379388 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 11 04:43:12.379530 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 11 04:43:12.424879 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 11 04:43:12.461586 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 11 04:43:12.465187 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 11 04:43:12.490512 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 11 04:43:12.534235 systemd-networkd[1417]: lo: Link UP
Sep 11 04:43:12.534243 systemd-networkd[1417]: lo: Gained carrier
Sep 11 04:43:12.534976 systemd-networkd[1417]: Enumeration completed
Sep 11 04:43:12.535091 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 11 04:43:12.535553 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 04:43:12.535563 systemd-networkd[1417]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 11 04:43:12.536073 systemd-networkd[1417]: eth0: Link UP
Sep 11 04:43:12.536178 systemd-networkd[1417]: eth0: Gained carrier
Sep 11 04:43:12.536206 systemd-networkd[1417]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 11 04:43:12.538476 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 11 04:43:12.542455 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 11 04:43:12.544862 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 11 04:43:12.545117 systemd-resolved[1351]: Positive Trust Anchors:
Sep 11 04:43:12.545138 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 11 04:43:12.545169 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 11 04:43:12.546148 systemd[1]: Reached target time-set.target - System Time Set.
Sep 11 04:43:12.552779 systemd-resolved[1351]: Defaulting to hostname 'linux'.
Sep 11 04:43:12.553288 systemd-networkd[1417]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 11 04:43:12.554711 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 11 04:43:12.555459 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection.
Sep 11 04:43:12.556493 systemd[1]: Reached target network.target - Network.
Sep 11 04:43:12.557144 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 11 04:43:12.559015 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 11 04:43:12.559077 systemd-timesyncd[1422]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 11 04:43:12.559119 systemd-timesyncd[1422]: Initial clock synchronization to Thu 2025-09-11 04:43:12.248840 UTC.
Sep 11 04:43:12.559935 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 11 04:43:12.561182 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 11 04:43:12.562395 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 11 04:43:12.564427 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 11 04:43:12.565375 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 11 04:43:12.566240 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 11 04:43:12.566268 systemd[1]: Reached target paths.target - Path Units. Sep 11 04:43:12.566902 systemd[1]: Reached target timers.target - Timer Units. Sep 11 04:43:12.568152 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 11 04:43:12.571107 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 11 04:43:12.573496 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 11 04:43:12.574605 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 11 04:43:12.575630 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 11 04:43:12.579840 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 11 04:43:12.581619 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 11 04:43:12.584260 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 11 04:43:12.585326 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 11 04:43:12.595412 systemd[1]: Reached target sockets.target - Socket Units. Sep 11 04:43:12.596146 systemd[1]: Reached target basic.target - Basic System. Sep 11 04:43:12.597025 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 11 04:43:12.597059 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Sep 11 04:43:12.597997 systemd[1]: Starting containerd.service - containerd container runtime... Sep 11 04:43:12.600149 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 11 04:43:12.601951 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 11 04:43:12.606999 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 11 04:43:12.608812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 11 04:43:12.609632 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 11 04:43:12.611390 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 11 04:43:12.612999 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 11 04:43:12.613476 jq[1467]: false Sep 11 04:43:12.614699 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 11 04:43:12.616773 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 11 04:43:12.620930 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 11 04:43:12.622606 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 11 04:43:12.624354 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 11 04:43:12.624778 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 11 04:43:12.626303 systemd[1]: Starting update-engine.service - Update Engine... Sep 11 04:43:12.626666 extend-filesystems[1468]: Found /dev/vda6 Sep 11 04:43:12.632349 extend-filesystems[1468]: Found /dev/vda9 Sep 11 04:43:12.627826 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 11 04:43:12.634442 extend-filesystems[1468]: Checking size of /dev/vda9 Sep 11 04:43:12.635706 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 11 04:43:12.638326 jq[1486]: true Sep 11 04:43:12.640115 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 11 04:43:12.640388 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 11 04:43:12.640678 systemd[1]: motdgen.service: Deactivated successfully. Sep 11 04:43:12.640892 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 11 04:43:12.654412 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 11 04:43:12.655343 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 11 04:43:12.658650 extend-filesystems[1468]: Resized partition /dev/vda9 Sep 11 04:43:12.663915 extend-filesystems[1507]: resize2fs 1.47.3 (8-Jul-2025) Sep 11 04:43:12.665673 update_engine[1480]: I20250911 04:43:12.665023 1480 main.cc:92] Flatcar Update Engine starting Sep 11 04:43:12.666095 tar[1494]: linux-arm64/LICENSE Sep 11 04:43:12.666095 tar[1494]: linux-arm64/helm Sep 11 04:43:12.668251 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 11 04:43:12.669489 jq[1496]: true Sep 11 04:43:12.679961 dbus-daemon[1465]: [system] SELinux support is enabled Sep 11 04:43:12.680214 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 11 04:43:12.681760 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 11 04:43:12.683614 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 11 04:43:12.683649 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 11 04:43:12.685412 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 11 04:43:12.685435 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 11 04:43:12.698337 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 11 04:43:12.712710 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 11 04:43:12.712710 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 11 04:43:12.712710 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 11 04:43:12.717111 extend-filesystems[1468]: Resized filesystem in /dev/vda9 Sep 11 04:43:12.714551 systemd[1]: Started update-engine.service - Update Engine. Sep 11 04:43:12.721023 update_engine[1480]: I20250911 04:43:12.714612 1480 update_check_scheduler.cc:74] Next update check in 6m18s Sep 11 04:43:12.716185 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 11 04:43:12.716589 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 11 04:43:12.721762 systemd-logind[1476]: Watching system buttons on /dev/input/event0 (Power Button) Sep 11 04:43:12.723591 systemd-logind[1476]: New seat seat0. Sep 11 04:43:12.728676 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 11 04:43:12.729727 systemd[1]: Started systemd-logind.service - User Login Management. Sep 11 04:43:12.730862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 11 04:43:12.760071 bash[1533]: Updated "/home/core/.ssh/authorized_keys" Sep 11 04:43:12.764563 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
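The resize2fs entries above report an online grow of the root filesystem from 553472 to 1864699 blocks; since the kernel line notes these are 4k blocks, the change is easy to sanity-check:

```python
# Sanity-check the online resize reported above: the ext4 counts are in
# 4 KiB blocks ("1864699 (4k) blocks long"), so convert to bytes/GiB.
BLOCK = 4096
old_blocks, new_blocks = 553_472, 1_864_699

old_bytes = old_blocks * BLOCK
new_bytes = new_blocks * BLOCK
print(f"{old_bytes / 2**30:.2f} GiB -> {new_bytes / 2**30:.2f} GiB")
```

That is roughly 2.1 GiB growing to about 7.1 GiB, consistent with Flatcar's first-boot behavior of expanding /dev/vda9 to fill the disk.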
Sep 11 04:43:12.766133 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 11 04:43:12.802384 locksmithd[1520]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 11 04:43:12.871334 containerd[1511]: time="2025-09-11T04:43:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 11 04:43:12.873678 containerd[1511]: time="2025-09-11T04:43:12.873637320Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 11 04:43:12.884240 containerd[1511]: time="2025-09-11T04:43:12.884085720Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.96µs" Sep 11 04:43:12.884240 containerd[1511]: time="2025-09-11T04:43:12.884121960Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 11 04:43:12.884240 containerd[1511]: time="2025-09-11T04:43:12.884138920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 11 04:43:12.884471 containerd[1511]: time="2025-09-11T04:43:12.884446560Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 11 04:43:12.884535 containerd[1511]: time="2025-09-11T04:43:12.884522240Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 11 04:43:12.884607 containerd[1511]: time="2025-09-11T04:43:12.884593880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 11 04:43:12.884732 containerd[1511]: time="2025-09-11T04:43:12.884711840Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile 
type=io.containerd.snapshotter.v1 Sep 11 04:43:12.884790 containerd[1511]: time="2025-09-11T04:43:12.884775480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885063 containerd[1511]: time="2025-09-11T04:43:12.885036640Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885127 containerd[1511]: time="2025-09-11T04:43:12.885112960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885184 containerd[1511]: time="2025-09-11T04:43:12.885172160Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885286 containerd[1511]: time="2025-09-11T04:43:12.885269760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885419 containerd[1511]: time="2025-09-11T04:43:12.885400160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885669 containerd[1511]: time="2025-09-11T04:43:12.885643000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885767 containerd[1511]: time="2025-09-11T04:43:12.885749920Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 11 04:43:12.885814 containerd[1511]: time="2025-09-11T04:43:12.885802960Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange 
type=io.containerd.event.v1 Sep 11 04:43:12.885898 containerd[1511]: time="2025-09-11T04:43:12.885881320Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 11 04:43:12.886251 containerd[1511]: time="2025-09-11T04:43:12.886177600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 11 04:43:12.886307 containerd[1511]: time="2025-09-11T04:43:12.886294120Z" level=info msg="metadata content store policy set" policy=shared Sep 11 04:43:12.890067 containerd[1511]: time="2025-09-11T04:43:12.890037600Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 11 04:43:12.890122 containerd[1511]: time="2025-09-11T04:43:12.890090120Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 11 04:43:12.890122 containerd[1511]: time="2025-09-11T04:43:12.890104960Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 11 04:43:12.890122 containerd[1511]: time="2025-09-11T04:43:12.890116600Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 11 04:43:12.890186 containerd[1511]: time="2025-09-11T04:43:12.890129680Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 11 04:43:12.890186 containerd[1511]: time="2025-09-11T04:43:12.890140840Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 11 04:43:12.890186 containerd[1511]: time="2025-09-11T04:43:12.890152080Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 11 04:43:12.890186 containerd[1511]: time="2025-09-11T04:43:12.890163200Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 11 
04:43:12.890186 containerd[1511]: time="2025-09-11T04:43:12.890174080Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 11 04:43:12.890186 containerd[1511]: time="2025-09-11T04:43:12.890183960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 11 04:43:12.890304 containerd[1511]: time="2025-09-11T04:43:12.890199640Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 11 04:43:12.890304 containerd[1511]: time="2025-09-11T04:43:12.890212360Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 11 04:43:12.890379 containerd[1511]: time="2025-09-11T04:43:12.890335760Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 11 04:43:12.890406 containerd[1511]: time="2025-09-11T04:43:12.890365280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 11 04:43:12.890424 containerd[1511]: time="2025-09-11T04:43:12.890407400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 11 04:43:12.890424 containerd[1511]: time="2025-09-11T04:43:12.890418920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 11 04:43:12.890456 containerd[1511]: time="2025-09-11T04:43:12.890429080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 11 04:43:12.890456 containerd[1511]: time="2025-09-11T04:43:12.890439800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 11 04:43:12.890456 containerd[1511]: time="2025-09-11T04:43:12.890450480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 11 04:43:12.890511 containerd[1511]: 
time="2025-09-11T04:43:12.890459720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 11 04:43:12.890511 containerd[1511]: time="2025-09-11T04:43:12.890471440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 11 04:43:12.890511 containerd[1511]: time="2025-09-11T04:43:12.890482800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 11 04:43:12.890511 containerd[1511]: time="2025-09-11T04:43:12.890492320Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 11 04:43:12.890683 containerd[1511]: time="2025-09-11T04:43:12.890664800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 11 04:43:12.890703 containerd[1511]: time="2025-09-11T04:43:12.890684400Z" level=info msg="Start snapshots syncer" Sep 11 04:43:12.890737 containerd[1511]: time="2025-09-11T04:43:12.890710880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 11 04:43:12.890938 containerd[1511]: time="2025-09-11T04:43:12.890904720Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 11 04:43:12.891042 containerd[1511]: time="2025-09-11T04:43:12.890953040Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 11 04:43:12.891042 containerd[1511]: time="2025-09-11T04:43:12.891014040Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 11 04:43:12.891183 containerd[1511]: time="2025-09-11T04:43:12.891114800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 11 04:43:12.891183 containerd[1511]: time="2025-09-11T04:43:12.891143480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 11 04:43:12.891183 containerd[1511]: time="2025-09-11T04:43:12.891154560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 11 04:43:12.891183 containerd[1511]: time="2025-09-11T04:43:12.891166440Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 11 04:43:12.891183 containerd[1511]: time="2025-09-11T04:43:12.891178480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 11 04:43:12.891303 containerd[1511]: time="2025-09-11T04:43:12.891197640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 11 04:43:12.891303 containerd[1511]: time="2025-09-11T04:43:12.891211240Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 11 04:43:12.891303 containerd[1511]: time="2025-09-11T04:43:12.891256760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 11 04:43:12.891303 containerd[1511]: time="2025-09-11T04:43:12.891269480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 11 04:43:12.891303 containerd[1511]: time="2025-09-11T04:43:12.891280640Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 11 04:43:12.891387 containerd[1511]: time="2025-09-11T04:43:12.891311920Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 04:43:12.891387 containerd[1511]: time="2025-09-11T04:43:12.891326840Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 11 04:43:12.891387 containerd[1511]: time="2025-09-11T04:43:12.891335320Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 04:43:12.891387 containerd[1511]: time="2025-09-11T04:43:12.891343960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 11 04:43:12.891387 containerd[1511]: time="2025-09-11T04:43:12.891351320Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 11 04:43:12.891387 containerd[1511]: time="2025-09-11T04:43:12.891361680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 11 04:43:12.891387 containerd[1511]: time="2025-09-11T04:43:12.891371320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 11 04:43:12.891496 containerd[1511]: time="2025-09-11T04:43:12.891446440Z" level=info msg="runtime interface created" Sep 11 04:43:12.891496 containerd[1511]: time="2025-09-11T04:43:12.891451880Z" level=info msg="created NRI interface" Sep 11 04:43:12.891496 containerd[1511]: time="2025-09-11T04:43:12.891459080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 11 04:43:12.891496 containerd[1511]: time="2025-09-11T04:43:12.891468840Z" level=info msg="Connect containerd service" Sep 11 04:43:12.891556 containerd[1511]: time="2025-09-11T04:43:12.891497400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 11 04:43:12.893672 
containerd[1511]: time="2025-09-11T04:43:12.893173760Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 11 04:43:12.959836 containerd[1511]: time="2025-09-11T04:43:12.959739240Z" level=info msg="Start subscribing containerd event" Sep 11 04:43:12.959836 containerd[1511]: time="2025-09-11T04:43:12.959794880Z" level=info msg="Start recovering state" Sep 11 04:43:12.960032 containerd[1511]: time="2025-09-11T04:43:12.959947800Z" level=info msg="Start event monitor" Sep 11 04:43:12.960032 containerd[1511]: time="2025-09-11T04:43:12.959971880Z" level=info msg="Start cni network conf syncer for default" Sep 11 04:43:12.960032 containerd[1511]: time="2025-09-11T04:43:12.959980040Z" level=info msg="Start streaming server" Sep 11 04:43:12.960032 containerd[1511]: time="2025-09-11T04:43:12.959988640Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 11 04:43:12.960104 containerd[1511]: time="2025-09-11T04:43:12.960038360Z" level=info msg="runtime interface starting up..." Sep 11 04:43:12.960104 containerd[1511]: time="2025-09-11T04:43:12.960044360Z" level=info msg="starting plugins..." Sep 11 04:43:12.960104 containerd[1511]: time="2025-09-11T04:43:12.960060040Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 11 04:43:12.960397 containerd[1511]: time="2025-09-11T04:43:12.960373880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 11 04:43:12.960492 containerd[1511]: time="2025-09-11T04:43:12.960479120Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 11 04:43:12.960603 containerd[1511]: time="2025-09-11T04:43:12.960589160Z" level=info msg="containerd successfully booted in 0.089627s" Sep 11 04:43:12.960684 systemd[1]: Started containerd.service - containerd container runtime. 
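The containerd entries above (unlike the systemd ones) are emitted in logfmt: space-separated `key=value` pairs where values containing spaces are quoted. A minimal parser sketch (ours, not containerd's) for one of those lines:

```python
import re

# A containerd logfmt entry from the log above (verbatim).
entry = ('time="2025-09-11T04:43:12.960589160Z" level=info '
         'msg="containerd successfully booted in 0.089627s"')

# key="quoted value" (with possible escapes) or key=bareword.
pairs = re.findall(r'(\w+)=(?:"((?:[^"\\]|\\.)*)"|(\S+))', entry)
fields = {k: (quoted if quoted else bare) for k, quoted, bare in pairs}
print(fields["level"], "-", fields["msg"])
```

Note that the earlier `failed to load cni during init` error at this level is expected on a fresh node: nothing has populated /etc/cni/net.d yet, and the CRI plugin retries once a CNI config appears.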
Sep 11 04:43:13.001743 tar[1494]: linux-arm64/README.md Sep 11 04:43:13.018061 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 11 04:43:13.543490 sshd_keygen[1492]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 11 04:43:13.561193 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 11 04:43:13.563825 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 11 04:43:13.579226 systemd[1]: issuegen.service: Deactivated successfully. Sep 11 04:43:13.579436 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 11 04:43:13.581508 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 11 04:43:13.604291 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 11 04:43:13.606466 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 11 04:43:13.608195 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 11 04:43:13.609164 systemd[1]: Reached target getty.target - Login Prompts. Sep 11 04:43:14.013361 systemd-networkd[1417]: eth0: Gained IPv6LL Sep 11 04:43:14.017264 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 11 04:43:14.019005 systemd[1]: Reached target network-online.target - Network is Online. Sep 11 04:43:14.021391 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 11 04:43:14.023564 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 04:43:14.026346 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 11 04:43:14.050154 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 11 04:43:14.051625 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 11 04:43:14.053030 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Sep 11 04:43:14.055424 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 11 04:43:14.546492 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 04:43:14.547737 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 11 04:43:14.550690 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 11 04:43:14.552302 systemd[1]: Startup finished in 2.000s (kernel) + 5.471s (initrd) + 3.530s (userspace) = 11.002s. Sep 11 04:43:14.866908 kubelet[1604]: E0911 04:43:14.866837 1604 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 11 04:43:14.869044 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 11 04:43:14.869173 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 11 04:43:14.870374 systemd[1]: kubelet.service: Consumed 732ms CPU time, 256.7M memory peak. Sep 11 04:43:17.970562 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 11 04:43:17.971674 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:46074.service - OpenSSH per-connection server daemon (10.0.0.1:46074). Sep 11 04:43:18.064672 sshd[1618]: Accepted publickey for core from 10.0.0.1 port 46074 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A Sep 11 04:43:18.066415 sshd-session[1618]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 04:43:18.072416 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 11 04:43:18.073281 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
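Two things worth noting in the entries above: the kubelet exit is the normal pre-bootstrap state (kubeadm has not yet written /var/lib/kubelet/config.yaml, so the unit fails and will be restarted later), and the "Startup finished" line reports per-phase times that need not sum exactly to the printed total, since each phase is rounded to the millisecond independently of the total. A quick check of that arithmetic:

```python
import re

# The "Startup finished" entry from the log above (verbatim).
line = ("Startup finished in 2.000s (kernel) + 5.471s (initrd) "
        "+ 3.530s (userspace) = 11.002s.")

# Per-phase durations are the numbers followed by "s (", the total
# follows "= ".
phases = [float(x) for x in re.findall(r"([\d.]+)s \(", line)]
total = float(re.search(r"= ([\d.]+)s", line).group(1))
print(sum(phases), "vs printed total", total)
```

Here the phases sum to 11.001s against a printed total of 11.002s; the 1 ms discrepancy is rounding, not a measurement error.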
Sep 11 04:43:18.078436 systemd-logind[1476]: New session 1 of user core. Sep 11 04:43:18.096459 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 11 04:43:18.098968 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 11 04:43:18.120269 (systemd)[1623]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 11 04:43:18.122297 systemd-logind[1476]: New session c1 of user core. Sep 11 04:43:18.227747 systemd[1623]: Queued start job for default target default.target. Sep 11 04:43:18.239165 systemd[1623]: Created slice app.slice - User Application Slice. Sep 11 04:43:18.239195 systemd[1623]: Reached target paths.target - Paths. Sep 11 04:43:18.239261 systemd[1623]: Reached target timers.target - Timers. Sep 11 04:43:18.240449 systemd[1623]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 11 04:43:18.249811 systemd[1623]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 11 04:43:18.249875 systemd[1623]: Reached target sockets.target - Sockets. Sep 11 04:43:18.249914 systemd[1623]: Reached target basic.target - Basic System. Sep 11 04:43:18.249943 systemd[1623]: Reached target default.target - Main User Target. Sep 11 04:43:18.249974 systemd[1623]: Startup finished in 122ms. Sep 11 04:43:18.250054 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 11 04:43:18.251454 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 11 04:43:18.315891 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:46076.service - OpenSSH per-connection server daemon (10.0.0.1:46076). Sep 11 04:43:18.377579 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 46076 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A Sep 11 04:43:18.378622 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 04:43:18.382709 systemd-logind[1476]: New session 2 of user core. 
Sep 11 04:43:18.400430 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 11 04:43:18.449966 sshd[1638]: Connection closed by 10.0.0.1 port 46076 Sep 11 04:43:18.450383 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Sep 11 04:43:18.460019 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:46076.service: Deactivated successfully. Sep 11 04:43:18.462091 systemd[1]: session-2.scope: Deactivated successfully. Sep 11 04:43:18.462788 systemd-logind[1476]: Session 2 logged out. Waiting for processes to exit. Sep 11 04:43:18.464944 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:46090.service - OpenSSH per-connection server daemon (10.0.0.1:46090). Sep 11 04:43:18.465519 systemd-logind[1476]: Removed session 2. Sep 11 04:43:18.504067 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 46090 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A Sep 11 04:43:18.505208 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 04:43:18.509004 systemd-logind[1476]: New session 3 of user core. Sep 11 04:43:18.524395 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 11 04:43:18.570577 sshd[1647]: Connection closed by 10.0.0.1 port 46090 Sep 11 04:43:18.571010 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Sep 11 04:43:18.592226 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:46090.service: Deactivated successfully. Sep 11 04:43:18.594477 systemd[1]: session-3.scope: Deactivated successfully. Sep 11 04:43:18.595567 systemd-logind[1476]: Session 3 logged out. Waiting for processes to exit. Sep 11 04:43:18.597128 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:46096.service - OpenSSH per-connection server daemon (10.0.0.1:46096). Sep 11 04:43:18.598499 systemd-logind[1476]: Removed session 3. 
Sep 11 04:43:18.652908 sshd[1653]: Accepted publickey for core from 10.0.0.1 port 46096 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:43:18.654032 sshd-session[1653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:43:18.658295 systemd-logind[1476]: New session 4 of user core.
Sep 11 04:43:18.677369 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 11 04:43:18.727860 sshd[1656]: Connection closed by 10.0.0.1 port 46096
Sep 11 04:43:18.728192 sshd-session[1653]: pam_unix(sshd:session): session closed for user core
Sep 11 04:43:18.746260 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:46096.service: Deactivated successfully.
Sep 11 04:43:18.748525 systemd[1]: session-4.scope: Deactivated successfully.
Sep 11 04:43:18.749175 systemd-logind[1476]: Session 4 logged out. Waiting for processes to exit.
Sep 11 04:43:18.751293 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:46112.service - OpenSSH per-connection server daemon (10.0.0.1:46112).
Sep 11 04:43:18.752182 systemd-logind[1476]: Removed session 4.
Sep 11 04:43:18.791303 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 46112 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:43:18.792478 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:43:18.796760 systemd-logind[1476]: New session 5 of user core.
Sep 11 04:43:18.803369 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 11 04:43:18.857414 sudo[1666]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 11 04:43:18.857669 sudo[1666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 04:43:18.868003 sudo[1666]: pam_unix(sudo:session): session closed for user root
Sep 11 04:43:18.869498 sshd[1665]: Connection closed by 10.0.0.1 port 46112
Sep 11 04:43:18.869860 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Sep 11 04:43:18.883164 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:46112.service: Deactivated successfully.
Sep 11 04:43:18.885663 systemd[1]: session-5.scope: Deactivated successfully.
Sep 11 04:43:18.886769 systemd-logind[1476]: Session 5 logged out. Waiting for processes to exit.
Sep 11 04:43:18.888608 systemd-logind[1476]: Removed session 5.
Sep 11 04:43:18.890323 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:46124.service - OpenSSH per-connection server daemon (10.0.0.1:46124).
Sep 11 04:43:18.943187 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 46124 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:43:18.944401 sshd-session[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:43:18.948268 systemd-logind[1476]: New session 6 of user core.
Sep 11 04:43:18.958375 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 11 04:43:19.007775 sudo[1677]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 11 04:43:19.008034 sudo[1677]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 04:43:19.012352 sudo[1677]: pam_unix(sudo:session): session closed for user root
Sep 11 04:43:19.016689 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 11 04:43:19.016918 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 04:43:19.025347 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 11 04:43:19.059959 augenrules[1699]: No rules
Sep 11 04:43:19.060573 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 11 04:43:19.062265 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 11 04:43:19.063079 sudo[1676]: pam_unix(sudo:session): session closed for user root
Sep 11 04:43:19.065067 sshd[1675]: Connection closed by 10.0.0.1 port 46124
Sep 11 04:43:19.065241 sshd-session[1672]: pam_unix(sshd:session): session closed for user core
Sep 11 04:43:19.074977 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:46124.service: Deactivated successfully.
Sep 11 04:43:19.077389 systemd[1]: session-6.scope: Deactivated successfully.
Sep 11 04:43:19.077998 systemd-logind[1476]: Session 6 logged out. Waiting for processes to exit.
Sep 11 04:43:19.080064 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:46128.service - OpenSSH per-connection server daemon (10.0.0.1:46128).
Sep 11 04:43:19.080515 systemd-logind[1476]: Removed session 6.
Sep 11 04:43:19.134773 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 46128 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:43:19.135860 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:43:19.139515 systemd-logind[1476]: New session 7 of user core.
Sep 11 04:43:19.159416 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 11 04:43:19.208696 sudo[1712]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 11 04:43:19.208945 sudo[1712]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 11 04:43:19.476646 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 11 04:43:19.493536 (dockerd)[1732]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 11 04:43:19.676399 dockerd[1732]: time="2025-09-11T04:43:19.676337440Z" level=info msg="Starting up"
Sep 11 04:43:19.677824 dockerd[1732]: time="2025-09-11T04:43:19.677803597Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 11 04:43:19.687254 dockerd[1732]: time="2025-09-11T04:43:19.687224724Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 11 04:43:19.781619 dockerd[1732]: time="2025-09-11T04:43:19.781533781Z" level=info msg="Loading containers: start."
Sep 11 04:43:19.789266 kernel: Initializing XFRM netlink socket
Sep 11 04:43:19.970554 systemd-networkd[1417]: docker0: Link UP
Sep 11 04:43:19.973376 dockerd[1732]: time="2025-09-11T04:43:19.973345738Z" level=info msg="Loading containers: done."
Sep 11 04:43:19.984589 dockerd[1732]: time="2025-09-11T04:43:19.984551659Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 11 04:43:19.984734 dockerd[1732]: time="2025-09-11T04:43:19.984620441Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 11 04:43:19.984734 dockerd[1732]: time="2025-09-11T04:43:19.984689026Z" level=info msg="Initializing buildkit"
Sep 11 04:43:20.005301 dockerd[1732]: time="2025-09-11T04:43:20.005268698Z" level=info msg="Completed buildkit initialization"
Sep 11 04:43:20.009928 dockerd[1732]: time="2025-09-11T04:43:20.009890049Z" level=info msg="Daemon has completed initialization"
Sep 11 04:43:20.010094 dockerd[1732]: time="2025-09-11T04:43:20.009944803Z" level=info msg="API listen on /run/docker.sock"
Sep 11 04:43:20.010176 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 11 04:43:20.579505 containerd[1511]: time="2025-09-11T04:43:20.579468173Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\""
Sep 11 04:43:21.169047 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3024218581.mount: Deactivated successfully.
Sep 11 04:43:22.249233 containerd[1511]: time="2025-09-11T04:43:22.249167903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:22.250229 containerd[1511]: time="2025-09-11T04:43:22.250143912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687"
Sep 11 04:43:22.251161 containerd[1511]: time="2025-09-11T04:43:22.251126404Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:22.254314 containerd[1511]: time="2025-09-11T04:43:22.254267873Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:22.256173 containerd[1511]: time="2025-09-11T04:43:22.256037284Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.676532374s"
Sep 11 04:43:22.256173 containerd[1511]: time="2025-09-11T04:43:22.256069978Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\""
Sep 11 04:43:22.256631 containerd[1511]: time="2025-09-11T04:43:22.256602893Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\""
Sep 11 04:43:23.641907 containerd[1511]: time="2025-09-11T04:43:23.641461558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:23.641907 containerd[1511]: time="2025-09-11T04:43:23.641813294Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202"
Sep 11 04:43:23.645048 containerd[1511]: time="2025-09-11T04:43:23.645000451Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:23.647232 containerd[1511]: time="2025-09-11T04:43:23.647187276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:23.648240 containerd[1511]: time="2025-09-11T04:43:23.648204632Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.391504199s"
Sep 11 04:43:23.648297 containerd[1511]: time="2025-09-11T04:43:23.648242680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\""
Sep 11 04:43:23.648643 containerd[1511]: time="2025-09-11T04:43:23.648613934Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\""
Sep 11 04:43:24.723135 containerd[1511]: time="2025-09-11T04:43:24.723081413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:24.724550 containerd[1511]: time="2025-09-11T04:43:24.724520336Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326"
Sep 11 04:43:24.725476 containerd[1511]: time="2025-09-11T04:43:24.725444612Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:24.728459 containerd[1511]: time="2025-09-11T04:43:24.728430603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:24.730166 containerd[1511]: time="2025-09-11T04:43:24.730137433Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.081494072s"
Sep 11 04:43:24.730199 containerd[1511]: time="2025-09-11T04:43:24.730174737Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\""
Sep 11 04:43:24.730599 containerd[1511]: time="2025-09-11T04:43:24.730570533Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\""
Sep 11 04:43:25.082558 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 11 04:43:25.083879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 04:43:25.212184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 04:43:25.216041 (kubelet)[2026]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 04:43:25.252094 kubelet[2026]: E0911 04:43:25.252048 2026 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 04:43:25.255018 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 04:43:25.255143 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 04:43:25.255510 systemd[1]: kubelet.service: Consumed 138ms CPU time, 107.8M memory peak.
Sep 11 04:43:25.796816 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1668052674.mount: Deactivated successfully.
Sep 11 04:43:26.138422 containerd[1511]: time="2025-09-11T04:43:26.138380765Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:26.139039 containerd[1511]: time="2025-09-11T04:43:26.138983061Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819"
Sep 11 04:43:26.139699 containerd[1511]: time="2025-09-11T04:43:26.139676490Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:26.141470 containerd[1511]: time="2025-09-11T04:43:26.141447704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:26.141939 containerd[1511]: time="2025-09-11T04:43:26.141904720Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.41130194s"
Sep 11 04:43:26.141939 containerd[1511]: time="2025-09-11T04:43:26.141934555Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\""
Sep 11 04:43:26.142479 containerd[1511]: time="2025-09-11T04:43:26.142305284Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 11 04:43:26.710534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1088406704.mount: Deactivated successfully.
Sep 11 04:43:27.501437 containerd[1511]: time="2025-09-11T04:43:27.501396815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:27.502174 containerd[1511]: time="2025-09-11T04:43:27.501842931Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 11 04:43:27.502917 containerd[1511]: time="2025-09-11T04:43:27.502878264Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:27.506393 containerd[1511]: time="2025-09-11T04:43:27.506361473Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:27.508037 containerd[1511]: time="2025-09-11T04:43:27.508008129Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.365671293s"
Sep 11 04:43:27.508286 containerd[1511]: time="2025-09-11T04:43:27.508248682Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 11 04:43:27.508845 containerd[1511]: time="2025-09-11T04:43:27.508797461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 11 04:43:27.945675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2669277083.mount: Deactivated successfully.
Sep 11 04:43:27.950311 containerd[1511]: time="2025-09-11T04:43:27.950268956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 04:43:27.951335 containerd[1511]: time="2025-09-11T04:43:27.951307549Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 11 04:43:27.952167 containerd[1511]: time="2025-09-11T04:43:27.952143799Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 04:43:27.954044 containerd[1511]: time="2025-09-11T04:43:27.954001783Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 11 04:43:27.955060 containerd[1511]: time="2025-09-11T04:43:27.954588971Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 445.759185ms"
Sep 11 04:43:27.955060 containerd[1511]: time="2025-09-11T04:43:27.954618911Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 11 04:43:27.955331 containerd[1511]: time="2025-09-11T04:43:27.955306297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Sep 11 04:43:28.485161 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount710515438.mount: Deactivated successfully.
Sep 11 04:43:31.104611 containerd[1511]: time="2025-09-11T04:43:31.103968252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:31.141885 containerd[1511]: time="2025-09-11T04:43:31.141843523Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167"
Sep 11 04:43:31.143110 containerd[1511]: time="2025-09-11T04:43:31.143062954Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:31.147144 containerd[1511]: time="2025-09-11T04:43:31.146496837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 11 04:43:31.148087 containerd[1511]: time="2025-09-11T04:43:31.147652253Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.192317191s"
Sep 11 04:43:31.148087 containerd[1511]: time="2025-09-11T04:43:31.147688606Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Sep 11 04:43:35.332540 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 11 04:43:35.333981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 04:43:35.463813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 04:43:35.467401 (kubelet)[2182]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 11 04:43:35.499626 kubelet[2182]: E0911 04:43:35.499586 2182 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 11 04:43:35.501856 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 11 04:43:35.501970 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 11 04:43:35.502363 systemd[1]: kubelet.service: Consumed 127ms CPU time, 105.6M memory peak.
Sep 11 04:43:37.010154 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 04:43:37.010324 systemd[1]: kubelet.service: Consumed 127ms CPU time, 105.6M memory peak.
Sep 11 04:43:37.012061 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 04:43:37.028865 systemd[1]: Reload requested from client PID 2196 ('systemctl') (unit session-7.scope)...
Sep 11 04:43:37.028878 systemd[1]: Reloading...
Sep 11 04:43:37.080235 zram_generator::config[2239]: No configuration found.
Sep 11 04:43:37.235577 systemd[1]: Reloading finished in 206 ms.
Sep 11 04:43:37.292010 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 11 04:43:37.292079 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 11 04:43:37.292328 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 04:43:37.294414 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 11 04:43:37.418501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 11 04:43:37.421774 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 11 04:43:37.456230 kubelet[2283]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 04:43:37.456230 kubelet[2283]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 11 04:43:37.456230 kubelet[2283]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 11 04:43:37.456493 kubelet[2283]: I0911 04:43:37.456276 2283 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 11 04:43:38.263927 kubelet[2283]: I0911 04:43:38.263879 2283 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Sep 11 04:43:38.263927 kubelet[2283]: I0911 04:43:38.263912 2283 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 11 04:43:38.264196 kubelet[2283]: I0911 04:43:38.264167 2283 server.go:954] "Client rotation is on, will bootstrap in background"
Sep 11 04:43:38.281475 kubelet[2283]: E0911 04:43:38.281442 2283 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 11 04:43:38.282845 kubelet[2283]: I0911 04:43:38.282825 2283 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 11 04:43:38.290420 kubelet[2283]: I0911 04:43:38.290398 2283 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 11 04:43:38.293009 kubelet[2283]: I0911 04:43:38.292990 2283 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 11 04:43:38.294203 kubelet[2283]: I0911 04:43:38.294151 2283 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 11 04:43:38.294384 kubelet[2283]: I0911 04:43:38.294197 2283 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 11 04:43:38.294478 kubelet[2283]: I0911 04:43:38.294457 2283 topology_manager.go:138] "Creating topology manager with none policy"
Sep 11 04:43:38.294478 kubelet[2283]: I0911 04:43:38.294466 2283 container_manager_linux.go:304] "Creating device plugin manager"
Sep 11 04:43:38.294651 kubelet[2283]: I0911 04:43:38.294634 2283 state_mem.go:36] "Initialized new in-memory state store"
Sep 11 04:43:38.296993 kubelet[2283]: I0911 04:43:38.296961 2283 kubelet.go:446] "Attempting to sync node with API server"
Sep 11 04:43:38.296993 kubelet[2283]: I0911 04:43:38.296987 2283 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 11 04:43:38.297039 kubelet[2283]: I0911 04:43:38.297011 2283 kubelet.go:352] "Adding apiserver pod source"
Sep 11 04:43:38.297039 kubelet[2283]: I0911 04:43:38.297021 2283 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 11 04:43:38.299061 kubelet[2283]: W0911 04:43:38.299013 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Sep 11 04:43:38.299097 kubelet[2283]: E0911 04:43:38.299071 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 11 04:43:38.300255 kubelet[2283]: I0911 04:43:38.300231 2283 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 11 04:43:38.300296 kubelet[2283]: W0911 04:43:38.300247 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Sep 11 04:43:38.300321 kubelet[2283]: E0911 04:43:38.300302 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 11 04:43:38.300839 kubelet[2283]: I0911 04:43:38.300826 2283 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 11 04:43:38.300947 kubelet[2283]: W0911 04:43:38.300935 2283 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 11 04:43:38.301790 kubelet[2283]: I0911 04:43:38.301773 2283 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 11 04:43:38.301839 kubelet[2283]: I0911 04:43:38.301812 2283 server.go:1287] "Started kubelet"
Sep 11 04:43:38.303246 kubelet[2283]: I0911 04:43:38.302324 2283 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 11 04:43:38.303246 kubelet[2283]: I0911 04:43:38.302620 2283 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 11 04:43:38.303246 kubelet[2283]: I0911 04:43:38.302674 2283 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Sep 11 04:43:38.303246 kubelet[2283]: I0911 04:43:38.303021 2283 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 11 04:43:38.303610 kubelet[2283]: I0911 04:43:38.303592 2283 server.go:479] "Adding debug handlers to kubelet server"
Sep 11 04:43:38.305025 kubelet[2283]: I0911 04:43:38.304994 2283 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 11 04:43:38.306358 kubelet[2283]: I0911 04:43:38.306331 2283 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 11 04:43:38.306624 kubelet[2283]: E0911 04:43:38.306602 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 11 04:43:38.307226 kubelet[2283]: I0911 04:43:38.307186 2283 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 11 04:43:38.307286 kubelet[2283]: I0911 04:43:38.307248 2283 reconciler.go:26] "Reconciler: start to sync state"
Sep 11 04:43:38.307994 kubelet[2283]: E0911 04:43:38.307759 2283 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186420ccbd3f3747 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 04:43:38.301790023 +0000 UTC m=+0.877224433,LastTimestamp:2025-09-11 04:43:38.301790023 +0000 UTC m=+0.877224433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 11 04:43:38.308081 kubelet[2283]: W0911 04:43:38.308034 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused
Sep 11 04:43:38.308081 kubelet[2283]: E0911 04:43:38.308071 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError"
Sep 11 04:43:38.308153 kubelet[2283]: E0911 04:43:38.308127 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms"
Sep 11 04:43:38.309816 kubelet[2283]: I0911 04:43:38.309797 2283 factory.go:221] Registration of the systemd container factory successfully
Sep 11 04:43:38.309979 kubelet[2283]: I0911 04:43:38.309961 2283 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 11 04:43:38.311337 kubelet[2283]: I0911 04:43:38.311294 2283 factory.go:221] Registration of the containerd container factory successfully
Sep 11 04:43:38.320903 kubelet[2283]: I0911 04:43:38.320849 2283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 11 04:43:38.321743 kubelet[2283]: I0911 04:43:38.321712 2283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 11 04:43:38.321743 kubelet[2283]: I0911 04:43:38.321739 2283 status_manager.go:227] "Starting to sync pod status with apiserver"
Sep 11 04:43:38.321806 kubelet[2283]: I0911 04:43:38.321755 2283 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 11 04:43:38.321806 kubelet[2283]: I0911 04:43:38.321762 2283 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 04:43:38.321806 kubelet[2283]: E0911 04:43:38.321797 2283 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 04:43:38.322604 kubelet[2283]: W0911 04:43:38.322156 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Sep 11 04:43:38.322604 kubelet[2283]: E0911 04:43:38.322203 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Sep 11 04:43:38.325404 kubelet[2283]: E0911 04:43:38.325324 2283 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186420ccbd3f3747 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-11 04:43:38.301790023 +0000 UTC m=+0.877224433,LastTimestamp:2025-09-11 04:43:38.301790023 +0000 UTC m=+0.877224433,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 11 04:43:38.325747 kubelet[2283]: I0911 04:43:38.325731 2283 
cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 04:43:38.325809 kubelet[2283]: I0911 04:43:38.325768 2283 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 04:43:38.325809 kubelet[2283]: I0911 04:43:38.325795 2283 state_mem.go:36] "Initialized new in-memory state store" Sep 11 04:43:38.401698 kubelet[2283]: I0911 04:43:38.401651 2283 policy_none.go:49] "None policy: Start" Sep 11 04:43:38.401698 kubelet[2283]: I0911 04:43:38.401702 2283 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 11 04:43:38.401834 kubelet[2283]: I0911 04:43:38.401722 2283 state_mem.go:35] "Initializing new in-memory state store" Sep 11 04:43:38.407191 kubelet[2283]: E0911 04:43:38.407038 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 04:43:38.407698 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 11 04:43:38.422592 kubelet[2283]: E0911 04:43:38.422554 2283 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 11 04:43:38.435137 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 11 04:43:38.450360 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Sep 11 04:43:38.451561 kubelet[2283]: I0911 04:43:38.451529 2283 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 04:43:38.451849 kubelet[2283]: I0911 04:43:38.451698 2283 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 04:43:38.451849 kubelet[2283]: I0911 04:43:38.451715 2283 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 04:43:38.451918 kubelet[2283]: I0911 04:43:38.451893 2283 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 04:43:38.452708 kubelet[2283]: E0911 04:43:38.452674 2283 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 11 04:43:38.452756 kubelet[2283]: E0911 04:43:38.452717 2283 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 11 04:43:38.508486 kubelet[2283]: E0911 04:43:38.508446 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms" Sep 11 04:43:38.553846 kubelet[2283]: I0911 04:43:38.553643 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 04:43:38.554150 kubelet[2283]: E0911 04:43:38.554063 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Sep 11 04:43:38.630197 systemd[1]: Created slice kubepods-burstable-pod3c49e9b2282ec35fe93212a30424dbbc.slice - libcontainer container kubepods-burstable-pod3c49e9b2282ec35fe93212a30424dbbc.slice. 
Sep 11 04:43:38.652659 kubelet[2283]: E0911 04:43:38.652621 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:38.655331 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 11 04:43:38.656932 kubelet[2283]: E0911 04:43:38.656909 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:38.658718 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. Sep 11 04:43:38.660108 kubelet[2283]: E0911 04:43:38.660080 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:38.755488 kubelet[2283]: I0911 04:43:38.755449 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 04:43:38.755870 kubelet[2283]: E0911 04:43:38.755830 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Sep 11 04:43:38.808731 kubelet[2283]: I0911 04:43:38.808568 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 04:43:38.808731 kubelet[2283]: I0911 04:43:38.808626 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" 
(UniqueName: \"kubernetes.io/host-path/3c49e9b2282ec35fe93212a30424dbbc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c49e9b2282ec35fe93212a30424dbbc\") " pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:38.808731 kubelet[2283]: I0911 04:43:38.808646 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3c49e9b2282ec35fe93212a30424dbbc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c49e9b2282ec35fe93212a30424dbbc\") " pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:38.808731 kubelet[2283]: I0911 04:43:38.808719 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c49e9b2282ec35fe93212a30424dbbc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c49e9b2282ec35fe93212a30424dbbc\") " pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:38.808855 kubelet[2283]: I0911 04:43:38.808760 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:38.808855 kubelet[2283]: I0911 04:43:38.808791 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:38.808855 kubelet[2283]: I0911 04:43:38.808817 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:38.808855 kubelet[2283]: I0911 04:43:38.808847 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:38.808936 kubelet[2283]: I0911 04:43:38.808863 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:38.909041 kubelet[2283]: E0911 04:43:38.908982 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms" Sep 11 04:43:38.953794 kubelet[2283]: E0911 04:43:38.953710 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:38.954354 containerd[1511]: time="2025-09-11T04:43:38.954310529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c49e9b2282ec35fe93212a30424dbbc,Namespace:kube-system,Attempt:0,}" Sep 11 04:43:38.957582 kubelet[2283]: E0911 04:43:38.957481 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers 
have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:38.957889 containerd[1511]: time="2025-09-11T04:43:38.957844539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 11 04:43:38.961093 kubelet[2283]: E0911 04:43:38.961062 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:38.961414 containerd[1511]: time="2025-09-11T04:43:38.961385698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 11 04:43:38.975291 containerd[1511]: time="2025-09-11T04:43:38.975254465Z" level=info msg="connecting to shim 97bb149ed6212ca7e2ddb27a47364124d2cbc68d727992758a12dc2d701d609a" address="unix:///run/containerd/s/ed15f85fa5286ccd063e4f362ce3944e430f50216df4b23a7f7dda60d234653c" namespace=k8s.io protocol=ttrpc version=3 Sep 11 04:43:38.984313 containerd[1511]: time="2025-09-11T04:43:38.984271118Z" level=info msg="connecting to shim 58a90dabaa415c38c03d853296c3c76726839f927d0481ea927d7e9fe7b76a23" address="unix:///run/containerd/s/a75f634f9974672bc8bd40dd94b335ba79193822397716c369e97e958a253280" namespace=k8s.io protocol=ttrpc version=3 Sep 11 04:43:39.005049 containerd[1511]: time="2025-09-11T04:43:39.005010601Z" level=info msg="connecting to shim dc7ce266e1870e9e4e24ff3a54804a1cc8d8df8e769bfe272665d17599741ad3" address="unix:///run/containerd/s/4fb69a7fbd17d06c5d65229a18bfbc7550dac372c7160c6707cfcc94905c2cd8" namespace=k8s.io protocol=ttrpc version=3 Sep 11 04:43:39.006380 systemd[1]: Started cri-containerd-58a90dabaa415c38c03d853296c3c76726839f927d0481ea927d7e9fe7b76a23.scope - libcontainer container 58a90dabaa415c38c03d853296c3c76726839f927d0481ea927d7e9fe7b76a23. 
Sep 11 04:43:39.009269 systemd[1]: Started cri-containerd-97bb149ed6212ca7e2ddb27a47364124d2cbc68d727992758a12dc2d701d609a.scope - libcontainer container 97bb149ed6212ca7e2ddb27a47364124d2cbc68d727992758a12dc2d701d609a. Sep 11 04:43:39.029631 systemd[1]: Started cri-containerd-dc7ce266e1870e9e4e24ff3a54804a1cc8d8df8e769bfe272665d17599741ad3.scope - libcontainer container dc7ce266e1870e9e4e24ff3a54804a1cc8d8df8e769bfe272665d17599741ad3. Sep 11 04:43:39.042562 containerd[1511]: time="2025-09-11T04:43:39.042515022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"58a90dabaa415c38c03d853296c3c76726839f927d0481ea927d7e9fe7b76a23\"" Sep 11 04:43:39.043528 kubelet[2283]: E0911 04:43:39.043508 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:39.046186 containerd[1511]: time="2025-09-11T04:43:39.046031981Z" level=info msg="CreateContainer within sandbox \"58a90dabaa415c38c03d853296c3c76726839f927d0481ea927d7e9fe7b76a23\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 11 04:43:39.056858 containerd[1511]: time="2025-09-11T04:43:39.056826167Z" level=info msg="Container 566ae4f3b24f54b1bd8ad04142f32f95f3f01a7b144a9cd5ff52cd4158ca9fcd: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:43:39.063092 containerd[1511]: time="2025-09-11T04:43:39.063004238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:3c49e9b2282ec35fe93212a30424dbbc,Namespace:kube-system,Attempt:0,} returns sandbox id \"97bb149ed6212ca7e2ddb27a47364124d2cbc68d727992758a12dc2d701d609a\"" Sep 11 04:43:39.064314 kubelet[2283]: E0911 04:43:39.064289 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:39.064763 containerd[1511]: time="2025-09-11T04:43:39.064730636Z" level=info msg="CreateContainer within sandbox \"58a90dabaa415c38c03d853296c3c76726839f927d0481ea927d7e9fe7b76a23\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"566ae4f3b24f54b1bd8ad04142f32f95f3f01a7b144a9cd5ff52cd4158ca9fcd\"" Sep 11 04:43:39.065565 containerd[1511]: time="2025-09-11T04:43:39.065511455Z" level=info msg="StartContainer for \"566ae4f3b24f54b1bd8ad04142f32f95f3f01a7b144a9cd5ff52cd4158ca9fcd\"" Sep 11 04:43:39.066036 containerd[1511]: time="2025-09-11T04:43:39.066003542Z" level=info msg="CreateContainer within sandbox \"97bb149ed6212ca7e2ddb27a47364124d2cbc68d727992758a12dc2d701d609a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 11 04:43:39.067447 containerd[1511]: time="2025-09-11T04:43:39.067420313Z" level=info msg="connecting to shim 566ae4f3b24f54b1bd8ad04142f32f95f3f01a7b144a9cd5ff52cd4158ca9fcd" address="unix:///run/containerd/s/a75f634f9974672bc8bd40dd94b335ba79193822397716c369e97e958a253280" protocol=ttrpc version=3 Sep 11 04:43:39.070764 containerd[1511]: time="2025-09-11T04:43:39.070730203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc7ce266e1870e9e4e24ff3a54804a1cc8d8df8e769bfe272665d17599741ad3\"" Sep 11 04:43:39.071637 kubelet[2283]: E0911 04:43:39.071613 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:39.073530 containerd[1511]: time="2025-09-11T04:43:39.073426272Z" level=info msg="Container b8a73a5bd116ca36a30dbfd0f2917d5327127dfd2a19bea0103f964f5481fae3: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:43:39.073701 containerd[1511]: 
time="2025-09-11T04:43:39.073676331Z" level=info msg="CreateContainer within sandbox \"dc7ce266e1870e9e4e24ff3a54804a1cc8d8df8e769bfe272665d17599741ad3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 11 04:43:39.081559 containerd[1511]: time="2025-09-11T04:43:39.081527425Z" level=info msg="Container 7987ceceb31a36378f360097a88c51a2118a01aee5b604f207011f8d02c57131: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:43:39.082269 containerd[1511]: time="2025-09-11T04:43:39.082199614Z" level=info msg="CreateContainer within sandbox \"97bb149ed6212ca7e2ddb27a47364124d2cbc68d727992758a12dc2d701d609a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b8a73a5bd116ca36a30dbfd0f2917d5327127dfd2a19bea0103f964f5481fae3\"" Sep 11 04:43:39.082724 containerd[1511]: time="2025-09-11T04:43:39.082701049Z" level=info msg="StartContainer for \"b8a73a5bd116ca36a30dbfd0f2917d5327127dfd2a19bea0103f964f5481fae3\"" Sep 11 04:43:39.083362 systemd[1]: Started cri-containerd-566ae4f3b24f54b1bd8ad04142f32f95f3f01a7b144a9cd5ff52cd4158ca9fcd.scope - libcontainer container 566ae4f3b24f54b1bd8ad04142f32f95f3f01a7b144a9cd5ff52cd4158ca9fcd. 
Sep 11 04:43:39.083797 containerd[1511]: time="2025-09-11T04:43:39.083772877Z" level=info msg="connecting to shim b8a73a5bd116ca36a30dbfd0f2917d5327127dfd2a19bea0103f964f5481fae3" address="unix:///run/containerd/s/ed15f85fa5286ccd063e4f362ce3944e430f50216df4b23a7f7dda60d234653c" protocol=ttrpc version=3 Sep 11 04:43:39.090942 containerd[1511]: time="2025-09-11T04:43:39.090902481Z" level=info msg="CreateContainer within sandbox \"dc7ce266e1870e9e4e24ff3a54804a1cc8d8df8e769bfe272665d17599741ad3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7987ceceb31a36378f360097a88c51a2118a01aee5b604f207011f8d02c57131\"" Sep 11 04:43:39.092382 containerd[1511]: time="2025-09-11T04:43:39.092356728Z" level=info msg="StartContainer for \"7987ceceb31a36378f360097a88c51a2118a01aee5b604f207011f8d02c57131\"" Sep 11 04:43:39.093742 containerd[1511]: time="2025-09-11T04:43:39.093689481Z" level=info msg="connecting to shim 7987ceceb31a36378f360097a88c51a2118a01aee5b604f207011f8d02c57131" address="unix:///run/containerd/s/4fb69a7fbd17d06c5d65229a18bfbc7550dac372c7160c6707cfcc94905c2cd8" protocol=ttrpc version=3 Sep 11 04:43:39.103379 systemd[1]: Started cri-containerd-b8a73a5bd116ca36a30dbfd0f2917d5327127dfd2a19bea0103f964f5481fae3.scope - libcontainer container b8a73a5bd116ca36a30dbfd0f2917d5327127dfd2a19bea0103f964f5481fae3. Sep 11 04:43:39.111401 systemd[1]: Started cri-containerd-7987ceceb31a36378f360097a88c51a2118a01aee5b604f207011f8d02c57131.scope - libcontainer container 7987ceceb31a36378f360097a88c51a2118a01aee5b604f207011f8d02c57131. 
Sep 11 04:43:39.138502 containerd[1511]: time="2025-09-11T04:43:39.138041925Z" level=info msg="StartContainer for \"566ae4f3b24f54b1bd8ad04142f32f95f3f01a7b144a9cd5ff52cd4158ca9fcd\" returns successfully" Sep 11 04:43:39.155422 containerd[1511]: time="2025-09-11T04:43:39.155361722Z" level=info msg="StartContainer for \"b8a73a5bd116ca36a30dbfd0f2917d5327127dfd2a19bea0103f964f5481fae3\" returns successfully" Sep 11 04:43:39.157982 kubelet[2283]: I0911 04:43:39.157943 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 04:43:39.158447 kubelet[2283]: E0911 04:43:39.158402 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Sep 11 04:43:39.170415 containerd[1511]: time="2025-09-11T04:43:39.170382132Z" level=info msg="StartContainer for \"7987ceceb31a36378f360097a88c51a2118a01aee5b604f207011f8d02c57131\" returns successfully" Sep 11 04:43:39.194489 kubelet[2283]: W0911 04:43:39.194392 2283 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Sep 11 04:43:39.194489 kubelet[2283]: E0911 04:43:39.194487 2283 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Sep 11 04:43:39.331790 kubelet[2283]: E0911 04:43:39.331658 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:39.332466 kubelet[2283]: E0911 04:43:39.332451 2283 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:39.335421 kubelet[2283]: E0911 04:43:39.335258 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:39.335421 kubelet[2283]: E0911 04:43:39.335373 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:39.337500 kubelet[2283]: E0911 04:43:39.337483 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:39.337594 kubelet[2283]: E0911 04:43:39.337581 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:39.959720 kubelet[2283]: I0911 04:43:39.959688 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 04:43:40.342369 kubelet[2283]: E0911 04:43:40.341043 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:40.342369 kubelet[2283]: E0911 04:43:40.341158 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:40.342369 kubelet[2283]: E0911 04:43:40.341391 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:40.342369 kubelet[2283]: E0911 04:43:40.341467 2283 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:40.342864 kubelet[2283]: E0911 04:43:40.342846 2283 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 11 04:43:40.342967 kubelet[2283]: E0911 04:43:40.342949 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:40.566518 kubelet[2283]: E0911 04:43:40.566469 2283 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 11 04:43:40.656879 kubelet[2283]: I0911 04:43:40.656492 2283 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 04:43:40.656879 kubelet[2283]: E0911 04:43:40.656529 2283 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 11 04:43:40.680429 kubelet[2283]: E0911 04:43:40.680395 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 04:43:40.781285 kubelet[2283]: E0911 04:43:40.781245 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 04:43:40.881663 kubelet[2283]: E0911 04:43:40.881604 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 04:43:40.982206 kubelet[2283]: E0911 04:43:40.982111 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 04:43:41.007762 kubelet[2283]: I0911 04:43:41.007492 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 
04:43:41.013046 kubelet[2283]: E0911 04:43:41.013010 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:41.013046 kubelet[2283]: I0911 04:43:41.013032 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:41.014776 kubelet[2283]: E0911 04:43:41.014606 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:41.014776 kubelet[2283]: I0911 04:43:41.014626 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 04:43:41.015931 kubelet[2283]: E0911 04:43:41.015905 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 04:43:41.299583 kubelet[2283]: I0911 04:43:41.299473 2283 apiserver.go:52] "Watching apiserver" Sep 11 04:43:41.307799 kubelet[2283]: I0911 04:43:41.307741 2283 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 04:43:41.341275 kubelet[2283]: I0911 04:43:41.341251 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 04:43:41.343066 kubelet[2283]: E0911 04:43:41.343038 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 11 04:43:41.343288 kubelet[2283]: E0911 04:43:41.343271 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:41.362472 kubelet[2283]: I0911 04:43:41.362454 2283 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:41.364495 kubelet[2283]: E0911 04:43:41.364327 2283 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:41.364495 kubelet[2283]: E0911 04:43:41.364452 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:42.355650 systemd[1]: Reload requested from client PID 2560 ('systemctl') (unit session-7.scope)... Sep 11 04:43:42.355664 systemd[1]: Reloading... Sep 11 04:43:42.413359 zram_generator::config[2606]: No configuration found. Sep 11 04:43:42.576342 systemd[1]: Reloading finished in 220 ms. Sep 11 04:43:42.601404 kubelet[2283]: I0911 04:43:42.601365 2283 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 04:43:42.601547 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 04:43:42.622161 systemd[1]: kubelet.service: Deactivated successfully. Sep 11 04:43:42.622428 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 11 04:43:42.622484 systemd[1]: kubelet.service: Consumed 1.215s CPU time, 128.1M memory peak. Sep 11 04:43:42.624013 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 11 04:43:42.746922 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Sep 11 04:43:42.750917 (kubelet)[2645]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 11 04:43:42.789389 kubelet[2645]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 04:43:42.789389 kubelet[2645]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 11 04:43:42.789389 kubelet[2645]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 11 04:43:42.789389 kubelet[2645]: I0911 04:43:42.789280 2645 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 11 04:43:42.795060 kubelet[2645]: I0911 04:43:42.795021 2645 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 11 04:43:42.795060 kubelet[2645]: I0911 04:43:42.795050 2645 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 11 04:43:42.795337 kubelet[2645]: I0911 04:43:42.795313 2645 server.go:954] "Client rotation is on, will bootstrap in background" Sep 11 04:43:42.796529 kubelet[2645]: I0911 04:43:42.796512 2645 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 11 04:43:42.798704 kubelet[2645]: I0911 04:43:42.798652 2645 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 11 04:43:42.803869 kubelet[2645]: I0911 04:43:42.803848 2645 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 11 04:43:42.806417 kubelet[2645]: I0911 04:43:42.806398 2645 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 11 04:43:42.806626 kubelet[2645]: I0911 04:43:42.806600 2645 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 11 04:43:42.806801 kubelet[2645]: I0911 04:43:42.806627 2645 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManag
erPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 11 04:43:42.806870 kubelet[2645]: I0911 04:43:42.806816 2645 topology_manager.go:138] "Creating topology manager with none policy" Sep 11 04:43:42.806870 kubelet[2645]: I0911 04:43:42.806824 2645 container_manager_linux.go:304] "Creating device plugin manager" Sep 11 04:43:42.806870 kubelet[2645]: I0911 04:43:42.806864 2645 state_mem.go:36] "Initialized new in-memory state store" Sep 11 04:43:42.806991 kubelet[2645]: I0911 04:43:42.806980 2645 kubelet.go:446] "Attempting to sync node with API server" Sep 11 04:43:42.807022 kubelet[2645]: I0911 04:43:42.806992 2645 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 11 04:43:42.807022 kubelet[2645]: I0911 04:43:42.807013 2645 kubelet.go:352] "Adding apiserver pod source" Sep 11 04:43:42.807303 kubelet[2645]: I0911 04:43:42.807024 2645 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 11 04:43:42.807602 kubelet[2645]: I0911 04:43:42.807563 2645 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 11 04:43:42.808058 kubelet[2645]: I0911 04:43:42.808036 2645 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 11 04:43:42.808574 kubelet[2645]: I0911 04:43:42.808553 2645 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 11 04:43:42.808647 kubelet[2645]: I0911 04:43:42.808584 2645 server.go:1287] "Started kubelet" Sep 11 04:43:42.809856 kubelet[2645]: I0911 04:43:42.809810 2645 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 11 04:43:42.810207 
kubelet[2645]: I0911 04:43:42.810192 2645 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 11 04:43:42.810365 kubelet[2645]: I0911 04:43:42.810347 2645 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 11 04:43:42.810484 kubelet[2645]: I0911 04:43:42.810443 2645 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 11 04:43:42.811546 kubelet[2645]: E0911 04:43:42.811475 2645 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 11 04:43:42.811546 kubelet[2645]: I0911 04:43:42.811502 2645 server.go:479] "Adding debug handlers to kubelet server" Sep 11 04:43:42.811689 kubelet[2645]: I0911 04:43:42.811523 2645 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 11 04:43:42.811689 kubelet[2645]: I0911 04:43:42.811671 2645 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 11 04:43:42.811946 kubelet[2645]: I0911 04:43:42.811861 2645 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 11 04:43:42.812015 kubelet[2645]: I0911 04:43:42.811997 2645 reconciler.go:26] "Reconciler: start to sync state" Sep 11 04:43:42.813028 kubelet[2645]: I0911 04:43:42.813009 2645 factory.go:221] Registration of the systemd container factory successfully Sep 11 04:43:42.813185 kubelet[2645]: I0911 04:43:42.813165 2645 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 11 04:43:42.816231 kubelet[2645]: E0911 04:43:42.814514 2645 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 11 04:43:42.820337 kubelet[2645]: I0911 04:43:42.820317 2645 factory.go:221] Registration of the containerd container factory successfully Sep 11 04:43:42.829303 kubelet[2645]: I0911 04:43:42.829265 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 11 04:43:42.830606 kubelet[2645]: I0911 04:43:42.830562 2645 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 11 04:43:42.830704 kubelet[2645]: I0911 04:43:42.830693 2645 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 11 04:43:42.830765 kubelet[2645]: I0911 04:43:42.830756 2645 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 11 04:43:42.830818 kubelet[2645]: I0911 04:43:42.830809 2645 kubelet.go:2382] "Starting kubelet main sync loop" Sep 11 04:43:42.831382 kubelet[2645]: E0911 04:43:42.831360 2645 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 11 04:43:42.859524 kubelet[2645]: I0911 04:43:42.859502 2645 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 11 04:43:42.859524 kubelet[2645]: I0911 04:43:42.859522 2645 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 11 04:43:42.859614 kubelet[2645]: I0911 04:43:42.859540 2645 state_mem.go:36] "Initialized new in-memory state store" Sep 11 04:43:42.859692 kubelet[2645]: I0911 04:43:42.859675 2645 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 11 04:43:42.859720 kubelet[2645]: I0911 04:43:42.859691 2645 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 11 04:43:42.859720 kubelet[2645]: I0911 04:43:42.859708 2645 policy_none.go:49] "None policy: Start" Sep 11 04:43:42.859720 kubelet[2645]: I0911 04:43:42.859715 2645 memory_manager.go:186] "Starting 
memorymanager" policy="None" Sep 11 04:43:42.859781 kubelet[2645]: I0911 04:43:42.859724 2645 state_mem.go:35] "Initializing new in-memory state store" Sep 11 04:43:42.859820 kubelet[2645]: I0911 04:43:42.859809 2645 state_mem.go:75] "Updated machine memory state" Sep 11 04:43:42.863523 kubelet[2645]: I0911 04:43:42.863487 2645 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 11 04:43:42.863643 kubelet[2645]: I0911 04:43:42.863627 2645 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 11 04:43:42.863681 kubelet[2645]: I0911 04:43:42.863644 2645 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 11 04:43:42.866153 kubelet[2645]: I0911 04:43:42.865023 2645 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 11 04:43:42.867893 kubelet[2645]: E0911 04:43:42.867865 2645 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 11 04:43:42.932340 kubelet[2645]: I0911 04:43:42.932164 2645 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:42.932340 kubelet[2645]: I0911 04:43:42.932203 2645 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 11 04:43:42.932340 kubelet[2645]: I0911 04:43:42.932177 2645 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:42.967798 kubelet[2645]: I0911 04:43:42.967776 2645 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 11 04:43:42.972312 kubelet[2645]: I0911 04:43:42.972290 2645 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 11 04:43:42.972386 kubelet[2645]: I0911 04:43:42.972364 2645 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 11 04:43:43.013586 kubelet[2645]: I0911 04:43:43.013448 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:43.013586 kubelet[2645]: I0911 04:43:43.013483 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:43.013586 kubelet[2645]: I0911 04:43:43.013503 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:43.013586 kubelet[2645]: I0911 04:43:43.013520 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:43.013586 kubelet[2645]: I0911 04:43:43.013540 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 11 04:43:43.013762 kubelet[2645]: I0911 04:43:43.013559 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 11 04:43:43.013762 kubelet[2645]: I0911 04:43:43.013606 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3c49e9b2282ec35fe93212a30424dbbc-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c49e9b2282ec35fe93212a30424dbbc\") " pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:43.013762 kubelet[2645]: I0911 04:43:43.013652 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/3c49e9b2282ec35fe93212a30424dbbc-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"3c49e9b2282ec35fe93212a30424dbbc\") " pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:43.013762 kubelet[2645]: I0911 04:43:43.013674 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3c49e9b2282ec35fe93212a30424dbbc-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"3c49e9b2282ec35fe93212a30424dbbc\") " pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:43.237817 kubelet[2645]: E0911 04:43:43.237605 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:43.237817 kubelet[2645]: E0911 04:43:43.237674 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:43.239277 kubelet[2645]: E0911 04:43:43.239257 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:43.355512 sudo[2681]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 11 04:43:43.355868 sudo[2681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 11 04:43:43.661563 sudo[2681]: pam_unix(sudo:session): session closed for user root Sep 11 04:43:43.807894 kubelet[2645]: I0911 04:43:43.807870 2645 apiserver.go:52] "Watching apiserver" Sep 11 04:43:43.812208 kubelet[2645]: I0911 04:43:43.812173 2645 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 11 04:43:43.848650 kubelet[2645]: I0911 04:43:43.848587 2645 kubelet.go:3194] 
"Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:43.849038 kubelet[2645]: E0911 04:43:43.848787 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:43.849038 kubelet[2645]: E0911 04:43:43.848900 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:43.855876 kubelet[2645]: E0911 04:43:43.855854 2645 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 11 04:43:43.856128 kubelet[2645]: E0911 04:43:43.856111 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:43.868646 kubelet[2645]: I0911 04:43:43.868595 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.8685831419999999 podStartE2EDuration="1.868583142s" podCreationTimestamp="2025-09-11 04:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 04:43:43.868494724 +0000 UTC m=+1.114387890" watchObservedRunningTime="2025-09-11 04:43:43.868583142 +0000 UTC m=+1.114476268" Sep 11 04:43:43.882858 kubelet[2645]: I0911 04:43:43.882677 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.8826665930000002 podStartE2EDuration="1.882666593s" podCreationTimestamp="2025-09-11 04:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-09-11 04:43:43.876136046 +0000 UTC m=+1.122029212" watchObservedRunningTime="2025-09-11 04:43:43.882666593 +0000 UTC m=+1.128559759" Sep 11 04:43:43.890973 kubelet[2645]: I0911 04:43:43.890940 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.890929196 podStartE2EDuration="1.890929196s" podCreationTimestamp="2025-09-11 04:43:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 04:43:43.883090254 +0000 UTC m=+1.128983500" watchObservedRunningTime="2025-09-11 04:43:43.890929196 +0000 UTC m=+1.136822362" Sep 11 04:43:44.850252 kubelet[2645]: E0911 04:43:44.849974 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:44.850759 kubelet[2645]: E0911 04:43:44.850733 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:45.057922 kubelet[2645]: E0911 04:43:45.057897 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:45.530455 sudo[1712]: pam_unix(sudo:session): session closed for user root Sep 11 04:43:45.531476 sshd[1711]: Connection closed by 10.0.0.1 port 46128 Sep 11 04:43:45.533019 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Sep 11 04:43:45.536113 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:46128.service: Deactivated successfully. Sep 11 04:43:45.538006 systemd[1]: session-7.scope: Deactivated successfully. Sep 11 04:43:45.538211 systemd[1]: session-7.scope: Consumed 8.132s CPU time, 260M memory peak. 
Sep 11 04:43:45.539285 systemd-logind[1476]: Session 7 logged out. Waiting for processes to exit. Sep 11 04:43:45.540797 systemd-logind[1476]: Removed session 7. Sep 11 04:43:49.417067 kubelet[2645]: I0911 04:43:49.417016 2645 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 11 04:43:49.417466 containerd[1511]: time="2025-09-11T04:43:49.417377512Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 11 04:43:49.417661 kubelet[2645]: I0911 04:43:49.417578 2645 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 11 04:43:50.300996 systemd[1]: Created slice kubepods-besteffort-pod85d2c59d_d76b_4220_b270_ab1a7e7b06dc.slice - libcontainer container kubepods-besteffort-pod85d2c59d_d76b_4220_b270_ab1a7e7b06dc.slice. Sep 11 04:43:50.316552 systemd[1]: Created slice kubepods-burstable-pod1070ec9e_1959_4684_af5c_385736e842fd.slice - libcontainer container kubepods-burstable-pod1070ec9e_1959_4684_af5c_385736e842fd.slice. 
Sep 11 04:43:50.362094 kubelet[2645]: I0911 04:43:50.362047 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/85d2c59d-d76b-4220-b270-ab1a7e7b06dc-kube-proxy\") pod \"kube-proxy-dgvtn\" (UID: \"85d2c59d-d76b-4220-b270-ab1a7e7b06dc\") " pod="kube-system/kube-proxy-dgvtn" Sep 11 04:43:50.362094 kubelet[2645]: I0911 04:43:50.362085 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1070ec9e-1959-4684-af5c-385736e842fd-clustermesh-secrets\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362094 kubelet[2645]: I0911 04:43:50.362103 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-net\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362286 kubelet[2645]: I0911 04:43:50.362118 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/85d2c59d-d76b-4220-b270-ab1a7e7b06dc-xtables-lock\") pod \"kube-proxy-dgvtn\" (UID: \"85d2c59d-d76b-4220-b270-ab1a7e7b06dc\") " pod="kube-system/kube-proxy-dgvtn" Sep 11 04:43:50.362286 kubelet[2645]: I0911 04:43:50.362133 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/85d2c59d-d76b-4220-b270-ab1a7e7b06dc-lib-modules\") pod \"kube-proxy-dgvtn\" (UID: \"85d2c59d-d76b-4220-b270-ab1a7e7b06dc\") " pod="kube-system/kube-proxy-dgvtn" Sep 11 04:43:50.362286 kubelet[2645]: I0911 04:43:50.362148 2645 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-run\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362286 kubelet[2645]: I0911 04:43:50.362167 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-hostproc\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362286 kubelet[2645]: I0911 04:43:50.362182 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cni-path\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362286 kubelet[2645]: I0911 04:43:50.362198 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5q65\" (UniqueName: \"kubernetes.io/projected/85d2c59d-d76b-4220-b270-ab1a7e7b06dc-kube-api-access-b5q65\") pod \"kube-proxy-dgvtn\" (UID: \"85d2c59d-d76b-4220-b270-ab1a7e7b06dc\") " pod="kube-system/kube-proxy-dgvtn" Sep 11 04:43:50.362409 kubelet[2645]: I0911 04:43:50.362214 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-etc-cni-netd\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362409 kubelet[2645]: I0911 04:43:50.362253 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-xtables-lock\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362409 kubelet[2645]: I0911 04:43:50.362277 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-kernel\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362409 kubelet[2645]: I0911 04:43:50.362292 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vnh\" (UniqueName: \"kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-kube-api-access-m2vnh\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362409 kubelet[2645]: I0911 04:43:50.362309 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-bpf-maps\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362409 kubelet[2645]: I0911 04:43:50.362326 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-cgroup\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362527 kubelet[2645]: I0911 04:43:50.362341 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-lib-modules\") pod \"cilium-b262g\" (UID: 
\"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362527 kubelet[2645]: I0911 04:43:50.362358 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1070ec9e-1959-4684-af5c-385736e842fd-cilium-config-path\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.362527 kubelet[2645]: I0911 04:43:50.362374 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-hubble-tls\") pod \"cilium-b262g\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " pod="kube-system/cilium-b262g" Sep 11 04:43:50.448242 kubelet[2645]: I0911 04:43:50.448185 2645 status_manager.go:890] "Failed to get status for pod" podUID="0889c8ab-4b7c-4c3f-83b7-063fa988d6af" pod="kube-system/cilium-operator-6c4d7847fc-k7bnj" err="pods \"cilium-operator-6c4d7847fc-k7bnj\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" Sep 11 04:43:50.453974 systemd[1]: Created slice kubepods-besteffort-pod0889c8ab_4b7c_4c3f_83b7_063fa988d6af.slice - libcontainer container kubepods-besteffort-pod0889c8ab_4b7c_4c3f_83b7_063fa988d6af.slice. 
Sep 11 04:43:50.462883 kubelet[2645]: I0911 04:43:50.462806 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-k7bnj\" (UID: \"0889c8ab-4b7c-4c3f-83b7-063fa988d6af\") " pod="kube-system/cilium-operator-6c4d7847fc-k7bnj" Sep 11 04:43:50.465463 kubelet[2645]: I0911 04:43:50.462988 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4j86d\" (UniqueName: \"kubernetes.io/projected/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-kube-api-access-4j86d\") pod \"cilium-operator-6c4d7847fc-k7bnj\" (UID: \"0889c8ab-4b7c-4c3f-83b7-063fa988d6af\") " pod="kube-system/cilium-operator-6c4d7847fc-k7bnj" Sep 11 04:43:50.614966 kubelet[2645]: E0911 04:43:50.614923 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.615569 containerd[1511]: time="2025-09-11T04:43:50.615538018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgvtn,Uid:85d2c59d-d76b-4220-b270-ab1a7e7b06dc,Namespace:kube-system,Attempt:0,}" Sep 11 04:43:50.619904 kubelet[2645]: E0911 04:43:50.619881 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.620861 containerd[1511]: time="2025-09-11T04:43:50.620796644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b262g,Uid:1070ec9e-1959-4684-af5c-385736e842fd,Namespace:kube-system,Attempt:0,}" Sep 11 04:43:50.632996 containerd[1511]: time="2025-09-11T04:43:50.632709570Z" level=info msg="connecting to shim 3c680d05644108c0a7e8775db81fd8cdb1017d03cfd58c4f35f91098faf15a3d" 
address="unix:///run/containerd/s/ac0e33b15c28ce39cbc85a120f3c1a0f6cde4de742eccf0024ec890199b6c15d" namespace=k8s.io protocol=ttrpc version=3 Sep 11 04:43:50.638187 containerd[1511]: time="2025-09-11T04:43:50.638102091Z" level=info msg="connecting to shim 752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd" address="unix:///run/containerd/s/d63c2f267602e5c3d4a191d6d024c7b4a623eda4783e910c6932b2b35da27572" namespace=k8s.io protocol=ttrpc version=3 Sep 11 04:43:50.655381 systemd[1]: Started cri-containerd-3c680d05644108c0a7e8775db81fd8cdb1017d03cfd58c4f35f91098faf15a3d.scope - libcontainer container 3c680d05644108c0a7e8775db81fd8cdb1017d03cfd58c4f35f91098faf15a3d. Sep 11 04:43:50.657996 systemd[1]: Started cri-containerd-752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd.scope - libcontainer container 752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd. Sep 11 04:43:50.684766 containerd[1511]: time="2025-09-11T04:43:50.684671076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b262g,Uid:1070ec9e-1959-4684-af5c-385736e842fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\"" Sep 11 04:43:50.685512 kubelet[2645]: E0911 04:43:50.685489 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.687557 containerd[1511]: time="2025-09-11T04:43:50.687355615Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 11 04:43:50.693351 containerd[1511]: time="2025-09-11T04:43:50.693320479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dgvtn,Uid:85d2c59d-d76b-4220-b270-ab1a7e7b06dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c680d05644108c0a7e8775db81fd8cdb1017d03cfd58c4f35f91098faf15a3d\"" Sep 11 
04:43:50.693923 kubelet[2645]: E0911 04:43:50.693891 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.696083 containerd[1511]: time="2025-09-11T04:43:50.695959973Z" level=info msg="CreateContainer within sandbox \"3c680d05644108c0a7e8775db81fd8cdb1017d03cfd58c4f35f91098faf15a3d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 11 04:43:50.703585 containerd[1511]: time="2025-09-11T04:43:50.703549218Z" level=info msg="Container fed38887ab1683d35db94b18487918ff9be4b142b1cda060e95e75afc3fce7b2: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:43:50.710273 containerd[1511]: time="2025-09-11T04:43:50.710236162Z" level=info msg="CreateContainer within sandbox \"3c680d05644108c0a7e8775db81fd8cdb1017d03cfd58c4f35f91098faf15a3d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fed38887ab1683d35db94b18487918ff9be4b142b1cda060e95e75afc3fce7b2\"" Sep 11 04:43:50.711022 containerd[1511]: time="2025-09-11T04:43:50.710969164Z" level=info msg="StartContainer for \"fed38887ab1683d35db94b18487918ff9be4b142b1cda060e95e75afc3fce7b2\"" Sep 11 04:43:50.712489 containerd[1511]: time="2025-09-11T04:43:50.712391722Z" level=info msg="connecting to shim fed38887ab1683d35db94b18487918ff9be4b142b1cda060e95e75afc3fce7b2" address="unix:///run/containerd/s/ac0e33b15c28ce39cbc85a120f3c1a0f6cde4de742eccf0024ec890199b6c15d" protocol=ttrpc version=3 Sep 11 04:43:50.733392 systemd[1]: Started cri-containerd-fed38887ab1683d35db94b18487918ff9be4b142b1cda060e95e75afc3fce7b2.scope - libcontainer container fed38887ab1683d35db94b18487918ff9be4b142b1cda060e95e75afc3fce7b2. 
Sep 11 04:43:50.757661 kubelet[2645]: E0911 04:43:50.757634 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.757986 kubelet[2645]: E0911 04:43:50.757962 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.759026 containerd[1511]: time="2025-09-11T04:43:50.758998552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k7bnj,Uid:0889c8ab-4b7c-4c3f-83b7-063fa988d6af,Namespace:kube-system,Attempt:0,}" Sep 11 04:43:50.779278 containerd[1511]: time="2025-09-11T04:43:50.778806957Z" level=info msg="StartContainer for \"fed38887ab1683d35db94b18487918ff9be4b142b1cda060e95e75afc3fce7b2\" returns successfully" Sep 11 04:43:50.784079 containerd[1511]: time="2025-09-11T04:43:50.784047581Z" level=info msg="connecting to shim 44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015" address="unix:///run/containerd/s/359f0532da0902ced9c26f25d54112d30d515e24f5f269353f2a2f5a00503a17" namespace=k8s.io protocol=ttrpc version=3 Sep 11 04:43:50.811388 systemd[1]: Started cri-containerd-44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015.scope - libcontainer container 44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015. 
Sep 11 04:43:50.845674 containerd[1511]: time="2025-09-11T04:43:50.845633478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-k7bnj,Uid:0889c8ab-4b7c-4c3f-83b7-063fa988d6af,Namespace:kube-system,Attempt:0,} returns sandbox id \"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\"" Sep 11 04:43:50.847006 kubelet[2645]: E0911 04:43:50.846981 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.867254 kubelet[2645]: E0911 04:43:50.866380 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.867254 kubelet[2645]: E0911 04:43:50.866839 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:50.877396 kubelet[2645]: I0911 04:43:50.877192 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dgvtn" podStartSLOduration=0.87717815 podStartE2EDuration="877.17815ms" podCreationTimestamp="2025-09-11 04:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 04:43:50.87717703 +0000 UTC m=+8.123070196" watchObservedRunningTime="2025-09-11 04:43:50.87717815 +0000 UTC m=+8.123071316" Sep 11 04:43:51.868995 kubelet[2645]: E0911 04:43:51.868635 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:52.130631 kubelet[2645]: E0911 04:43:52.130308 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:52.870233 kubelet[2645]: E0911 04:43:52.870040 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:53.650658 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3676661330.mount: Deactivated successfully. Sep 11 04:43:55.067121 kubelet[2645]: E0911 04:43:55.067048 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:57.550839 update_engine[1480]: I20250911 04:43:57.550772 1480 update_attempter.cc:509] Updating boot flags... Sep 11 04:43:58.762893 containerd[1511]: time="2025-09-11T04:43:58.762836000Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 04:43:58.763326 containerd[1511]: time="2025-09-11T04:43:58.763295953Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 11 04:43:58.764112 containerd[1511]: time="2025-09-11T04:43:58.764073570Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 04:43:58.765950 containerd[1511]: time="2025-09-11T04:43:58.765916785Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest 
\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.078523126s" Sep 11 04:43:58.766125 containerd[1511]: time="2025-09-11T04:43:58.766044394Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 11 04:43:58.776423 containerd[1511]: time="2025-09-11T04:43:58.776374669Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 11 04:43:58.784494 containerd[1511]: time="2025-09-11T04:43:58.784458339Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 04:43:58.793882 containerd[1511]: time="2025-09-11T04:43:58.793844665Z" level=info msg="Container 720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:43:58.794279 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount549043780.mount: Deactivated successfully. 
Sep 11 04:43:58.806846 containerd[1511]: time="2025-09-11T04:43:58.799056526Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\"" Sep 11 04:43:58.810549 containerd[1511]: time="2025-09-11T04:43:58.810490921Z" level=info msg="StartContainer for \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\"" Sep 11 04:43:58.811939 containerd[1511]: time="2025-09-11T04:43:58.811754773Z" level=info msg="connecting to shim 720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8" address="unix:///run/containerd/s/d63c2f267602e5c3d4a191d6d024c7b4a623eda4783e910c6932b2b35da27572" protocol=ttrpc version=3 Sep 11 04:43:58.855453 systemd[1]: Started cri-containerd-720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8.scope - libcontainer container 720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8. Sep 11 04:43:58.892402 containerd[1511]: time="2025-09-11T04:43:58.892363902Z" level=info msg="StartContainer for \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" returns successfully" Sep 11 04:43:58.905396 systemd[1]: cri-containerd-720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8.scope: Deactivated successfully. 
Sep 11 04:43:58.933168 containerd[1511]: time="2025-09-11T04:43:58.933054714Z" level=info msg="received exit event container_id:\"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" id:\"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" pid:3082 exited_at:{seconds:1757565838 nanos:925764662}" Sep 11 04:43:58.933340 containerd[1511]: time="2025-09-11T04:43:58.933129920Z" level=info msg="TaskExit event in podsandbox handler container_id:\"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" id:\"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" pid:3082 exited_at:{seconds:1757565838 nanos:925764662}" Sep 11 04:43:59.791633 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8-rootfs.mount: Deactivated successfully. Sep 11 04:43:59.890239 kubelet[2645]: E0911 04:43:59.890191 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:43:59.893701 containerd[1511]: time="2025-09-11T04:43:59.893172417Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 04:43:59.907808 containerd[1511]: time="2025-09-11T04:43:59.907379204Z" level=info msg="Container 44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:43:59.913597 containerd[1511]: time="2025-09-11T04:43:59.913551393Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\"" Sep 11 04:43:59.914172 containerd[1511]: 
time="2025-09-11T04:43:59.914129914Z" level=info msg="StartContainer for \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\"" Sep 11 04:43:59.914936 containerd[1511]: time="2025-09-11T04:43:59.914890446Z" level=info msg="connecting to shim 44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd" address="unix:///run/containerd/s/d63c2f267602e5c3d4a191d6d024c7b4a623eda4783e910c6932b2b35da27572" protocol=ttrpc version=3 Sep 11 04:43:59.936429 systemd[1]: Started cri-containerd-44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd.scope - libcontainer container 44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd. Sep 11 04:43:59.978887 containerd[1511]: time="2025-09-11T04:43:59.978766487Z" level=info msg="StartContainer for \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" returns successfully" Sep 11 04:43:59.983978 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 11 04:43:59.984178 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 11 04:43:59.984738 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 11 04:43:59.986059 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 11 04:43:59.991578 systemd[1]: cri-containerd-44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd.scope: Deactivated successfully. 
Sep 11 04:43:59.992409 containerd[1511]: time="2025-09-11T04:43:59.991927001Z" level=info msg="received exit event container_id:\"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" id:\"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" pid:3126 exited_at:{seconds:1757565839 nanos:991751469}" Sep 11 04:43:59.992409 containerd[1511]: time="2025-09-11T04:43:59.992122895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" id:\"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" pid:3126 exited_at:{seconds:1757565839 nanos:991751469}" Sep 11 04:44:00.014516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 11 04:44:00.394028 containerd[1511]: time="2025-09-11T04:44:00.393972924Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 04:44:00.394593 containerd[1511]: time="2025-09-11T04:44:00.394566484Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 11 04:44:00.395252 containerd[1511]: time="2025-09-11T04:44:00.395214727Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 11 04:44:00.397013 containerd[1511]: time="2025-09-11T04:44:00.396795711Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest 
\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.620367479s" Sep 11 04:44:00.397013 containerd[1511]: time="2025-09-11T04:44:00.396822953Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 11 04:44:00.399764 containerd[1511]: time="2025-09-11T04:44:00.399707344Z" level=info msg="CreateContainer within sandbox \"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 11 04:44:00.408311 containerd[1511]: time="2025-09-11T04:44:00.408263230Z" level=info msg="Container 3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:44:00.413279 containerd[1511]: time="2025-09-11T04:44:00.413201957Z" level=info msg="CreateContainer within sandbox \"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\"" Sep 11 04:44:00.413883 containerd[1511]: time="2025-09-11T04:44:00.413850840Z" level=info msg="StartContainer for \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\"" Sep 11 04:44:00.414740 containerd[1511]: time="2025-09-11T04:44:00.414703177Z" level=info msg="connecting to shim 3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843" address="unix:///run/containerd/s/359f0532da0902ced9c26f25d54112d30d515e24f5f269353f2a2f5a00503a17" protocol=ttrpc version=3 Sep 11 04:44:00.432391 systemd[1]: Started cri-containerd-3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843.scope - libcontainer container 3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843. 
Sep 11 04:44:00.455004 containerd[1511]: time="2025-09-11T04:44:00.454967202Z" level=info msg="StartContainer for \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" returns successfully" Sep 11 04:44:00.792611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd-rootfs.mount: Deactivated successfully. Sep 11 04:44:00.893823 kubelet[2645]: E0911 04:44:00.893467 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:00.896199 kubelet[2645]: E0911 04:44:00.896161 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:00.898579 containerd[1511]: time="2025-09-11T04:44:00.898506400Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 04:44:00.915662 containerd[1511]: time="2025-09-11T04:44:00.915624933Z" level=info msg="Container 1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:44:00.926953 kubelet[2645]: I0911 04:44:00.926886 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-k7bnj" podStartSLOduration=1.378471382 podStartE2EDuration="10.926870558s" podCreationTimestamp="2025-09-11 04:43:50 +0000 UTC" firstStartedPulling="2025-09-11 04:43:50.849046378 +0000 UTC m=+8.094939544" lastFinishedPulling="2025-09-11 04:44:00.397445594 +0000 UTC m=+17.643338720" observedRunningTime="2025-09-11 04:44:00.907197296 +0000 UTC m=+18.153090462" watchObservedRunningTime="2025-09-11 04:44:00.926870558 +0000 UTC m=+18.172763684" Sep 11 04:44:00.934485 
containerd[1511]: time="2025-09-11T04:44:00.934446859Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\"" Sep 11 04:44:00.934976 containerd[1511]: time="2025-09-11T04:44:00.934952773Z" level=info msg="StartContainer for \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\"" Sep 11 04:44:00.937262 containerd[1511]: time="2025-09-11T04:44:00.936575040Z" level=info msg="connecting to shim 1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8" address="unix:///run/containerd/s/d63c2f267602e5c3d4a191d6d024c7b4a623eda4783e910c6932b2b35da27572" protocol=ttrpc version=3 Sep 11 04:44:00.961366 systemd[1]: Started cri-containerd-1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8.scope - libcontainer container 1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8. Sep 11 04:44:01.009598 containerd[1511]: time="2025-09-11T04:44:01.009555125Z" level=info msg="StartContainer for \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" returns successfully" Sep 11 04:44:01.020291 systemd[1]: cri-containerd-1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8.scope: Deactivated successfully. 
Sep 11 04:44:01.030766 containerd[1511]: time="2025-09-11T04:44:01.030712219Z" level=info msg="received exit event container_id:\"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" id:\"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" pid:3227 exited_at:{seconds:1757565841 nanos:30434842}" Sep 11 04:44:01.030914 containerd[1511]: time="2025-09-11T04:44:01.030790664Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" id:\"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" pid:3227 exited_at:{seconds:1757565841 nanos:30434842}" Sep 11 04:44:01.055198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8-rootfs.mount: Deactivated successfully. Sep 11 04:44:01.901015 kubelet[2645]: E0911 04:44:01.900981 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:01.901750 kubelet[2645]: E0911 04:44:01.901113 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:01.904929 containerd[1511]: time="2025-09-11T04:44:01.904824038Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 04:44:01.915858 containerd[1511]: time="2025-09-11T04:44:01.915822612Z" level=info msg="Container bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:44:01.922981 containerd[1511]: time="2025-09-11T04:44:01.922937060Z" level=info msg="CreateContainer within sandbox 
\"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\"" Sep 11 04:44:01.923469 containerd[1511]: time="2025-09-11T04:44:01.923449453Z" level=info msg="StartContainer for \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\"" Sep 11 04:44:01.925000 containerd[1511]: time="2025-09-11T04:44:01.924921226Z" level=info msg="connecting to shim bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a" address="unix:///run/containerd/s/d63c2f267602e5c3d4a191d6d024c7b4a623eda4783e910c6932b2b35da27572" protocol=ttrpc version=3 Sep 11 04:44:01.943433 systemd[1]: Started cri-containerd-bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a.scope - libcontainer container bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a. Sep 11 04:44:01.973858 systemd[1]: cri-containerd-bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a.scope: Deactivated successfully. 
Sep 11 04:44:01.974453 containerd[1511]: time="2025-09-11T04:44:01.974421028Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" id:\"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" pid:3266 exited_at:{seconds:1757565841 nanos:974013042}" Sep 11 04:44:01.975245 containerd[1511]: time="2025-09-11T04:44:01.975204797Z" level=info msg="received exit event container_id:\"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" id:\"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" pid:3266 exited_at:{seconds:1757565841 nanos:974013042}" Sep 11 04:44:01.982205 containerd[1511]: time="2025-09-11T04:44:01.982165957Z" level=info msg="StartContainer for \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" returns successfully" Sep 11 04:44:01.992388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a-rootfs.mount: Deactivated successfully. 
Sep 11 04:44:02.907458 kubelet[2645]: E0911 04:44:02.907425 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:02.910860 containerd[1511]: time="2025-09-11T04:44:02.910807401Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 04:44:02.940305 containerd[1511]: time="2025-09-11T04:44:02.927191466Z" level=info msg="Container 6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:44:02.953177 containerd[1511]: time="2025-09-11T04:44:02.953123226Z" level=info msg="CreateContainer within sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\"" Sep 11 04:44:02.954289 containerd[1511]: time="2025-09-11T04:44:02.954261535Z" level=info msg="StartContainer for \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\"" Sep 11 04:44:02.955195 containerd[1511]: time="2025-09-11T04:44:02.955167789Z" level=info msg="connecting to shim 6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7" address="unix:///run/containerd/s/d63c2f267602e5c3d4a191d6d024c7b4a623eda4783e910c6932b2b35da27572" protocol=ttrpc version=3 Sep 11 04:44:02.975366 systemd[1]: Started cri-containerd-6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7.scope - libcontainer container 6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7. 
Sep 11 04:44:03.002165 containerd[1511]: time="2025-09-11T04:44:03.002075569Z" level=info msg="StartContainer for \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" returns successfully" Sep 11 04:44:03.077695 containerd[1511]: time="2025-09-11T04:44:03.077641508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" id:\"7be64e318e368f1497e84a2890b5e4e1d7f96fe588a73d7d5a75d64cca45ab59\" pid:3336 exited_at:{seconds:1757565843 nanos:77366293}" Sep 11 04:44:03.141367 kubelet[2645]: I0911 04:44:03.140891 2645 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 11 04:44:03.189573 systemd[1]: Created slice kubepods-burstable-pod6e0bf603_6e09_41a2_905e_0fd72362d2b1.slice - libcontainer container kubepods-burstable-pod6e0bf603_6e09_41a2_905e_0fd72362d2b1.slice. Sep 11 04:44:03.197039 systemd[1]: Created slice kubepods-burstable-pod2340c132_4733_498f_8473_85deba60875d.slice - libcontainer container kubepods-burstable-pod2340c132_4733_498f_8473_85deba60875d.slice. 
Sep 11 04:44:03.264705 kubelet[2645]: I0911 04:44:03.264665 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2340c132-4733-498f-8473-85deba60875d-config-volume\") pod \"coredns-668d6bf9bc-5lgl2\" (UID: \"2340c132-4733-498f-8473-85deba60875d\") " pod="kube-system/coredns-668d6bf9bc-5lgl2" Sep 11 04:44:03.264705 kubelet[2645]: I0911 04:44:03.264710 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jqn4m\" (UniqueName: \"kubernetes.io/projected/2340c132-4733-498f-8473-85deba60875d-kube-api-access-jqn4m\") pod \"coredns-668d6bf9bc-5lgl2\" (UID: \"2340c132-4733-498f-8473-85deba60875d\") " pod="kube-system/coredns-668d6bf9bc-5lgl2" Sep 11 04:44:03.264869 kubelet[2645]: I0911 04:44:03.264732 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6e0bf603-6e09-41a2-905e-0fd72362d2b1-config-volume\") pod \"coredns-668d6bf9bc-bsk89\" (UID: \"6e0bf603-6e09-41a2-905e-0fd72362d2b1\") " pod="kube-system/coredns-668d6bf9bc-bsk89" Sep 11 04:44:03.264869 kubelet[2645]: I0911 04:44:03.264757 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxz5v\" (UniqueName: \"kubernetes.io/projected/6e0bf603-6e09-41a2-905e-0fd72362d2b1-kube-api-access-bxz5v\") pod \"coredns-668d6bf9bc-bsk89\" (UID: \"6e0bf603-6e09-41a2-905e-0fd72362d2b1\") " pod="kube-system/coredns-668d6bf9bc-bsk89" Sep 11 04:44:03.495649 kubelet[2645]: E0911 04:44:03.495523 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:03.497529 containerd[1511]: time="2025-09-11T04:44:03.496195504Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-bsk89,Uid:6e0bf603-6e09-41a2-905e-0fd72362d2b1,Namespace:kube-system,Attempt:0,}" Sep 11 04:44:03.500095 kubelet[2645]: E0911 04:44:03.500043 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:03.500984 containerd[1511]: time="2025-09-11T04:44:03.500637879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5lgl2,Uid:2340c132-4733-498f-8473-85deba60875d,Namespace:kube-system,Attempt:0,}" Sep 11 04:44:03.913802 kubelet[2645]: E0911 04:44:03.913696 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:03.929058 kubelet[2645]: I0911 04:44:03.929004 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b262g" podStartSLOduration=5.840755613 podStartE2EDuration="13.928988718s" podCreationTimestamp="2025-09-11 04:43:50 +0000 UTC" firstStartedPulling="2025-09-11 04:43:50.686614332 +0000 UTC m=+7.932507498" lastFinishedPulling="2025-09-11 04:43:58.774847437 +0000 UTC m=+16.020740603" observedRunningTime="2025-09-11 04:44:03.928808588 +0000 UTC m=+21.174701754" watchObservedRunningTime="2025-09-11 04:44:03.928988718 +0000 UTC m=+21.174881884" Sep 11 04:44:04.915269 kubelet[2645]: E0911 04:44:04.915209 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:44:05.038120 systemd-networkd[1417]: cilium_host: Link UP Sep 11 04:44:05.038276 systemd-networkd[1417]: cilium_net: Link UP Sep 11 04:44:05.038421 systemd-networkd[1417]: cilium_net: Gained carrier Sep 11 04:44:05.038532 systemd-networkd[1417]: cilium_host: Gained carrier Sep 11 04:44:05.109526 
systemd-networkd[1417]: cilium_vxlan: Link UP
Sep 11 04:44:05.109649 systemd-networkd[1417]: cilium_vxlan: Gained carrier
Sep 11 04:44:05.360260 kernel: NET: Registered PF_ALG protocol family
Sep 11 04:44:05.661396 systemd-networkd[1417]: cilium_host: Gained IPv6LL
Sep 11 04:44:05.725468 systemd-networkd[1417]: cilium_net: Gained IPv6LL
Sep 11 04:44:05.920742 kubelet[2645]: E0911 04:44:05.920651 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:05.954438 systemd-networkd[1417]: lxc_health: Link UP
Sep 11 04:44:05.954702 systemd-networkd[1417]: lxc_health: Gained carrier
Sep 11 04:44:06.545599 systemd-networkd[1417]: lxcf7ed530a083d: Link UP
Sep 11 04:44:06.553238 kernel: eth0: renamed from tmpbd3ba
Sep 11 04:44:06.555090 systemd-networkd[1417]: lxced9cfa39d2a6: Link UP
Sep 11 04:44:06.555321 systemd-networkd[1417]: lxcf7ed530a083d: Gained carrier
Sep 11 04:44:06.556247 kernel: eth0: renamed from tmp44dc7
Sep 11 04:44:06.558348 systemd-networkd[1417]: lxced9cfa39d2a6: Gained carrier
Sep 11 04:44:06.749530 systemd-networkd[1417]: cilium_vxlan: Gained IPv6LL
Sep 11 04:44:06.922311 kubelet[2645]: E0911 04:44:06.922286 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:07.645422 systemd-networkd[1417]: lxced9cfa39d2a6: Gained IPv6LL
Sep 11 04:44:07.646337 systemd-networkd[1417]: lxc_health: Gained IPv6LL
Sep 11 04:44:08.477428 systemd-networkd[1417]: lxcf7ed530a083d: Gained IPv6LL
Sep 11 04:44:10.057785 containerd[1511]: time="2025-09-11T04:44:10.057706322Z" level=info msg="connecting to shim 44dc7ca84c419cce073a0e5203ffecaea788b56b230aba99c476e725663767bf" address="unix:///run/containerd/s/ddd6420968607ef8f7ba8de0fd40b8a3f5bf97ad7f5fad6cc0e1288618adbfac" namespace=k8s.io protocol=ttrpc version=3
Sep 11 04:44:10.060242 containerd[1511]: time="2025-09-11T04:44:10.059833172Z" level=info msg="connecting to shim bd3ba97b7609fd5493e6847c21e07dbb997eb3c2a8e395297aff21d38e44579b" address="unix:///run/containerd/s/16e555d3459a132f50dc9e0c28f4176173a79451aa0ba793defa86ca2c11b6bf" namespace=k8s.io protocol=ttrpc version=3
Sep 11 04:44:10.088418 systemd[1]: Started cri-containerd-44dc7ca84c419cce073a0e5203ffecaea788b56b230aba99c476e725663767bf.scope - libcontainer container 44dc7ca84c419cce073a0e5203ffecaea788b56b230aba99c476e725663767bf.
Sep 11 04:44:10.092107 systemd[1]: Started cri-containerd-bd3ba97b7609fd5493e6847c21e07dbb997eb3c2a8e395297aff21d38e44579b.scope - libcontainer container bd3ba97b7609fd5493e6847c21e07dbb997eb3c2a8e395297aff21d38e44579b.
Sep 11 04:44:10.105013 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 04:44:10.110628 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 11 04:44:10.130593 containerd[1511]: time="2025-09-11T04:44:10.130550739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-bsk89,Uid:6e0bf603-6e09-41a2-905e-0fd72362d2b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"44dc7ca84c419cce073a0e5203ffecaea788b56b230aba99c476e725663767bf\""
Sep 11 04:44:10.133325 containerd[1511]: time="2025-09-11T04:44:10.133294655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-5lgl2,Uid:2340c132-4733-498f-8473-85deba60875d,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd3ba97b7609fd5493e6847c21e07dbb997eb3c2a8e395297aff21d38e44579b\""
Sep 11 04:44:10.134057 kubelet[2645]: E0911 04:44:10.134026 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:10.134467 kubelet[2645]: E0911 04:44:10.134041 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:10.142826 containerd[1511]: time="2025-09-11T04:44:10.142795419Z" level=info msg="CreateContainer within sandbox \"44dc7ca84c419cce073a0e5203ffecaea788b56b230aba99c476e725663767bf\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 11 04:44:10.144160 containerd[1511]: time="2025-09-11T04:44:10.143883465Z" level=info msg="CreateContainer within sandbox \"bd3ba97b7609fd5493e6847c21e07dbb997eb3c2a8e395297aff21d38e44579b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 11 04:44:10.155590 containerd[1511]: time="2025-09-11T04:44:10.155397955Z" level=info msg="Container bb0c92a252cc30573253cc0c59e368ff74ec155ac3dd9263cb6fd5a4785b37a9: CDI devices from CRI Config.CDIDevices: []"
Sep 11 04:44:10.159799 containerd[1511]: time="2025-09-11T04:44:10.159772221Z" level=info msg="Container 4b9471e9da0fdbf22b14d482c2690af89cf0b81c391fc83f76b086e0f9f7af71: CDI devices from CRI Config.CDIDevices: []"
Sep 11 04:44:10.163354 containerd[1511]: time="2025-09-11T04:44:10.163323932Z" level=info msg="CreateContainer within sandbox \"bd3ba97b7609fd5493e6847c21e07dbb997eb3c2a8e395297aff21d38e44579b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb0c92a252cc30573253cc0c59e368ff74ec155ac3dd9263cb6fd5a4785b37a9\""
Sep 11 04:44:10.164043 containerd[1511]: time="2025-09-11T04:44:10.164014641Z" level=info msg="StartContainer for \"bb0c92a252cc30573253cc0c59e368ff74ec155ac3dd9263cb6fd5a4785b37a9\""
Sep 11 04:44:10.165064 containerd[1511]: time="2025-09-11T04:44:10.165028844Z" level=info msg="connecting to shim bb0c92a252cc30573253cc0c59e368ff74ec155ac3dd9263cb6fd5a4785b37a9" address="unix:///run/containerd/s/16e555d3459a132f50dc9e0c28f4176173a79451aa0ba793defa86ca2c11b6bf" protocol=ttrpc version=3
Sep 11 04:44:10.166182 containerd[1511]: time="2025-09-11T04:44:10.165849759Z" level=info msg="CreateContainer within sandbox \"44dc7ca84c419cce073a0e5203ffecaea788b56b230aba99c476e725663767bf\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4b9471e9da0fdbf22b14d482c2690af89cf0b81c391fc83f76b086e0f9f7af71\""
Sep 11 04:44:10.166895 containerd[1511]: time="2025-09-11T04:44:10.166866602Z" level=info msg="StartContainer for \"4b9471e9da0fdbf22b14d482c2690af89cf0b81c391fc83f76b086e0f9f7af71\""
Sep 11 04:44:10.167922 containerd[1511]: time="2025-09-11T04:44:10.167888886Z" level=info msg="connecting to shim 4b9471e9da0fdbf22b14d482c2690af89cf0b81c391fc83f76b086e0f9f7af71" address="unix:///run/containerd/s/ddd6420968607ef8f7ba8de0fd40b8a3f5bf97ad7f5fad6cc0e1288618adbfac" protocol=ttrpc version=3
Sep 11 04:44:10.184380 systemd[1]: Started cri-containerd-bb0c92a252cc30573253cc0c59e368ff74ec155ac3dd9263cb6fd5a4785b37a9.scope - libcontainer container bb0c92a252cc30573253cc0c59e368ff74ec155ac3dd9263cb6fd5a4785b37a9.
Sep 11 04:44:10.188164 systemd[1]: Started cri-containerd-4b9471e9da0fdbf22b14d482c2690af89cf0b81c391fc83f76b086e0f9f7af71.scope - libcontainer container 4b9471e9da0fdbf22b14d482c2690af89cf0b81c391fc83f76b086e0f9f7af71.
Sep 11 04:44:10.217518 containerd[1511]: time="2025-09-11T04:44:10.217465193Z" level=info msg="StartContainer for \"bb0c92a252cc30573253cc0c59e368ff74ec155ac3dd9263cb6fd5a4785b37a9\" returns successfully"
Sep 11 04:44:10.231910 containerd[1511]: time="2025-09-11T04:44:10.231802403Z" level=info msg="StartContainer for \"4b9471e9da0fdbf22b14d482c2690af89cf0b81c391fc83f76b086e0f9f7af71\" returns successfully"
Sep 11 04:44:10.657337 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:40354.service - OpenSSH per-connection server daemon (10.0.0.1:40354).
Sep 11 04:44:10.709830 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 40354 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:10.711052 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:10.715282 systemd-logind[1476]: New session 8 of user core.
Sep 11 04:44:10.725357 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 11 04:44:10.848134 sshd[3984]: Connection closed by 10.0.0.1 port 40354
Sep 11 04:44:10.848559 sshd-session[3981]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:10.854603 systemd-logind[1476]: Session 8 logged out. Waiting for processes to exit.
Sep 11 04:44:10.854771 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:40354.service: Deactivated successfully.
Sep 11 04:44:10.856433 systemd[1]: session-8.scope: Deactivated successfully.
Sep 11 04:44:10.858421 systemd-logind[1476]: Removed session 8.
Sep 11 04:44:10.934430 kubelet[2645]: E0911 04:44:10.934186 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:10.934765 kubelet[2645]: E0911 04:44:10.934748 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:10.944242 kubelet[2645]: I0911 04:44:10.943874 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5lgl2" podStartSLOduration=20.943859432 podStartE2EDuration="20.943859432s" podCreationTimestamp="2025-09-11 04:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 04:44:10.943358171 +0000 UTC m=+28.189251337" watchObservedRunningTime="2025-09-11 04:44:10.943859432 +0000 UTC m=+28.189752558"
Sep 11 04:44:10.968841 kubelet[2645]: I0911 04:44:10.968789 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-bsk89" podStartSLOduration=20.968771612 podStartE2EDuration="20.968771612s" podCreationTimestamp="2025-09-11 04:43:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 04:44:10.958911272 +0000 UTC m=+28.204804478" watchObservedRunningTime="2025-09-11 04:44:10.968771612 +0000 UTC m=+28.214664778"
Sep 11 04:44:11.936266 kubelet[2645]: E0911 04:44:11.936202 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:11.937292 kubelet[2645]: E0911 04:44:11.937256 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:12.937711 kubelet[2645]: E0911 04:44:12.937670 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:12.938069 kubelet[2645]: E0911 04:44:12.937748 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:15.864469 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:40370.service - OpenSSH per-connection server daemon (10.0.0.1:40370).
Sep 11 04:44:15.904897 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 40370 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:15.905932 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:15.910997 systemd-logind[1476]: New session 9 of user core.
Sep 11 04:44:15.920379 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 11 04:44:16.033360 sshd[4009]: Connection closed by 10.0.0.1 port 40370
Sep 11 04:44:16.033902 sshd-session[4006]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:16.037271 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:40370.service: Deactivated successfully.
Sep 11 04:44:16.039144 systemd[1]: session-9.scope: Deactivated successfully.
Sep 11 04:44:16.039995 systemd-logind[1476]: Session 9 logged out. Waiting for processes to exit.
Sep 11 04:44:16.042500 systemd-logind[1476]: Removed session 9.
Sep 11 04:44:17.644745 kubelet[2645]: I0911 04:44:17.644669 2645 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 11 04:44:17.645213 kubelet[2645]: E0911 04:44:17.645178 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:17.945868 kubelet[2645]: E0911 04:44:17.945770 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 11 04:44:21.048846 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:58472.service - OpenSSH per-connection server daemon (10.0.0.1:58472).
Sep 11 04:44:21.095387 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 58472 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:21.096165 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:21.100983 systemd-logind[1476]: New session 10 of user core.
Sep 11 04:44:21.114535 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 11 04:44:21.240779 sshd[4031]: Connection closed by 10.0.0.1 port 58472
Sep 11 04:44:21.241185 sshd-session[4028]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:21.244638 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:58472.service: Deactivated successfully.
Sep 11 04:44:21.247082 systemd[1]: session-10.scope: Deactivated successfully.
Sep 11 04:44:21.249845 systemd-logind[1476]: Session 10 logged out. Waiting for processes to exit.
Sep 11 04:44:21.251302 systemd-logind[1476]: Removed session 10.
Sep 11 04:44:26.252439 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:58482.service - OpenSSH per-connection server daemon (10.0.0.1:58482).
Sep 11 04:44:26.304837 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 58482 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:26.305818 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:26.310764 systemd-logind[1476]: New session 11 of user core.
Sep 11 04:44:26.321359 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 11 04:44:26.437109 sshd[4049]: Connection closed by 10.0.0.1 port 58482
Sep 11 04:44:26.437438 sshd-session[4046]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:26.441022 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:58482.service: Deactivated successfully.
Sep 11 04:44:26.443774 systemd[1]: session-11.scope: Deactivated successfully.
Sep 11 04:44:26.444412 systemd-logind[1476]: Session 11 logged out. Waiting for processes to exit.
Sep 11 04:44:26.445309 systemd-logind[1476]: Removed session 11.
Sep 11 04:44:31.460353 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:51676.service - OpenSSH per-connection server daemon (10.0.0.1:51676).
Sep 11 04:44:31.502844 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 51676 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:31.503940 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:31.507518 systemd-logind[1476]: New session 12 of user core.
Sep 11 04:44:31.517384 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 11 04:44:31.626708 sshd[4066]: Connection closed by 10.0.0.1 port 51676
Sep 11 04:44:31.627007 sshd-session[4063]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:31.638117 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:51676.service: Deactivated successfully.
Sep 11 04:44:31.639818 systemd[1]: session-12.scope: Deactivated successfully.
Sep 11 04:44:31.640655 systemd-logind[1476]: Session 12 logged out. Waiting for processes to exit.
Sep 11 04:44:31.643459 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:51684.service - OpenSSH per-connection server daemon (10.0.0.1:51684).
Sep 11 04:44:31.644116 systemd-logind[1476]: Removed session 12.
Sep 11 04:44:31.694416 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 51684 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:31.695419 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:31.698917 systemd-logind[1476]: New session 13 of user core.
Sep 11 04:44:31.709362 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 11 04:44:31.856945 sshd[4083]: Connection closed by 10.0.0.1 port 51684
Sep 11 04:44:31.856866 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:31.871236 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:51684.service: Deactivated successfully.
Sep 11 04:44:31.873979 systemd[1]: session-13.scope: Deactivated successfully.
Sep 11 04:44:31.876572 systemd-logind[1476]: Session 13 logged out. Waiting for processes to exit.
Sep 11 04:44:31.881960 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:51698.service - OpenSSH per-connection server daemon (10.0.0.1:51698).
Sep 11 04:44:31.883268 systemd-logind[1476]: Removed session 13.
Sep 11 04:44:31.937083 sshd[4095]: Accepted publickey for core from 10.0.0.1 port 51698 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:31.938331 sshd-session[4095]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:31.942265 systemd-logind[1476]: New session 14 of user core.
Sep 11 04:44:31.956412 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 11 04:44:32.071950 sshd[4098]: Connection closed by 10.0.0.1 port 51698
Sep 11 04:44:32.072426 sshd-session[4095]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:32.075727 systemd-logind[1476]: Session 14 logged out. Waiting for processes to exit.
Sep 11 04:44:32.076379 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:51698.service: Deactivated successfully.
Sep 11 04:44:32.079624 systemd[1]: session-14.scope: Deactivated successfully.
Sep 11 04:44:32.080966 systemd-logind[1476]: Removed session 14.
Sep 11 04:44:37.085479 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:51708.service - OpenSSH per-connection server daemon (10.0.0.1:51708).
Sep 11 04:44:37.147029 sshd[4112]: Accepted publickey for core from 10.0.0.1 port 51708 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:37.148106 sshd-session[4112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:37.151623 systemd-logind[1476]: New session 15 of user core.
Sep 11 04:44:37.163449 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 11 04:44:37.274941 sshd[4115]: Connection closed by 10.0.0.1 port 51708
Sep 11 04:44:37.275546 sshd-session[4112]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:37.287604 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:51708.service: Deactivated successfully.
Sep 11 04:44:37.289342 systemd[1]: session-15.scope: Deactivated successfully.
Sep 11 04:44:37.290035 systemd-logind[1476]: Session 15 logged out. Waiting for processes to exit.
Sep 11 04:44:37.292392 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:51712.service - OpenSSH per-connection server daemon (10.0.0.1:51712).
Sep 11 04:44:37.292884 systemd-logind[1476]: Removed session 15.
Sep 11 04:44:37.344563 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 51712 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:37.345803 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:37.349718 systemd-logind[1476]: New session 16 of user core.
Sep 11 04:44:37.356371 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 11 04:44:37.713386 sshd[4131]: Connection closed by 10.0.0.1 port 51712
Sep 11 04:44:37.713565 sshd-session[4128]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:37.724323 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:51712.service: Deactivated successfully.
Sep 11 04:44:37.725974 systemd[1]: session-16.scope: Deactivated successfully.
Sep 11 04:44:37.726680 systemd-logind[1476]: Session 16 logged out. Waiting for processes to exit.
Sep 11 04:44:37.728987 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:51716.service - OpenSSH per-connection server daemon (10.0.0.1:51716).
Sep 11 04:44:37.729747 systemd-logind[1476]: Removed session 16.
Sep 11 04:44:37.780745 sshd[4142]: Accepted publickey for core from 10.0.0.1 port 51716 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:37.782056 sshd-session[4142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:37.786095 systemd-logind[1476]: New session 17 of user core.
Sep 11 04:44:37.792378 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 11 04:44:38.388015 sshd[4145]: Connection closed by 10.0.0.1 port 51716
Sep 11 04:44:38.388396 sshd-session[4142]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:38.397375 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:51716.service: Deactivated successfully.
Sep 11 04:44:38.401520 systemd[1]: session-17.scope: Deactivated successfully.
Sep 11 04:44:38.403804 systemd-logind[1476]: Session 17 logged out. Waiting for processes to exit.
Sep 11 04:44:38.407499 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:51724.service - OpenSSH per-connection server daemon (10.0.0.1:51724).
Sep 11 04:44:38.409829 systemd-logind[1476]: Removed session 17.
Sep 11 04:44:38.457611 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 51724 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:38.458661 sshd-session[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:38.462251 systemd-logind[1476]: New session 18 of user core.
Sep 11 04:44:38.479415 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 11 04:44:38.687542 sshd[4167]: Connection closed by 10.0.0.1 port 51724
Sep 11 04:44:38.687503 sshd-session[4164]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:38.697661 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:51724.service: Deactivated successfully.
Sep 11 04:44:38.699952 systemd[1]: session-18.scope: Deactivated successfully.
Sep 11 04:44:38.702255 systemd-logind[1476]: Session 18 logged out. Waiting for processes to exit.
Sep 11 04:44:38.705900 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:51734.service - OpenSSH per-connection server daemon (10.0.0.1:51734).
Sep 11 04:44:38.706923 systemd-logind[1476]: Removed session 18.
Sep 11 04:44:38.763785 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 51734 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:38.764828 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:38.768415 systemd-logind[1476]: New session 19 of user core.
Sep 11 04:44:38.783363 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 11 04:44:38.898879 sshd[4182]: Connection closed by 10.0.0.1 port 51734
Sep 11 04:44:38.899417 sshd-session[4179]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:38.902773 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:51734.service: Deactivated successfully.
Sep 11 04:44:38.904448 systemd[1]: session-19.scope: Deactivated successfully.
Sep 11 04:44:38.905095 systemd-logind[1476]: Session 19 logged out. Waiting for processes to exit.
Sep 11 04:44:38.906098 systemd-logind[1476]: Removed session 19.
Sep 11 04:44:43.910267 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:56422.service - OpenSSH per-connection server daemon (10.0.0.1:56422).
Sep 11 04:44:43.969000 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 56422 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:43.970067 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:43.973984 systemd-logind[1476]: New session 20 of user core.
Sep 11 04:44:43.994376 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 11 04:44:44.101874 sshd[4203]: Connection closed by 10.0.0.1 port 56422
Sep 11 04:44:44.102184 sshd-session[4200]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:44.105397 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:56422.service: Deactivated successfully.
Sep 11 04:44:44.106994 systemd[1]: session-20.scope: Deactivated successfully.
Sep 11 04:44:44.107683 systemd-logind[1476]: Session 20 logged out. Waiting for processes to exit.
Sep 11 04:44:44.109019 systemd-logind[1476]: Removed session 20.
Sep 11 04:44:49.113189 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:56438.service - OpenSSH per-connection server daemon (10.0.0.1:56438).
Sep 11 04:44:49.161237 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 56438 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:49.162254 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:49.165844 systemd-logind[1476]: New session 21 of user core.
Sep 11 04:44:49.181378 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 11 04:44:49.285148 sshd[4220]: Connection closed by 10.0.0.1 port 56438
Sep 11 04:44:49.285635 sshd-session[4217]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:49.288899 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:56438.service: Deactivated successfully.
Sep 11 04:44:49.291553 systemd[1]: session-21.scope: Deactivated successfully.
Sep 11 04:44:49.292334 systemd-logind[1476]: Session 21 logged out. Waiting for processes to exit.
Sep 11 04:44:49.293759 systemd-logind[1476]: Removed session 21.
Sep 11 04:44:54.296162 systemd[1]: Started sshd@21-10.0.0.77:22-10.0.0.1:39980.service - OpenSSH per-connection server daemon (10.0.0.1:39980).
Sep 11 04:44:54.360324 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 39980 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:54.362068 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:54.370557 systemd-logind[1476]: New session 22 of user core.
Sep 11 04:44:54.384491 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 11 04:44:54.496825 sshd[4240]: Connection closed by 10.0.0.1 port 39980
Sep 11 04:44:54.497357 sshd-session[4237]: pam_unix(sshd:session): session closed for user core
Sep 11 04:44:54.508630 systemd[1]: sshd@21-10.0.0.77:22-10.0.0.1:39980.service: Deactivated successfully.
Sep 11 04:44:54.510324 systemd[1]: session-22.scope: Deactivated successfully.
Sep 11 04:44:54.511127 systemd-logind[1476]: Session 22 logged out. Waiting for processes to exit.
Sep 11 04:44:54.515446 systemd[1]: Started sshd@22-10.0.0.77:22-10.0.0.1:39982.service - OpenSSH per-connection server daemon (10.0.0.1:39982).
Sep 11 04:44:54.516008 systemd-logind[1476]: Removed session 22.
Sep 11 04:44:54.563810 sshd[4253]: Accepted publickey for core from 10.0.0.1 port 39982 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A
Sep 11 04:44:54.565563 sshd-session[4253]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 11 04:44:54.570610 systemd-logind[1476]: New session 23 of user core.
Sep 11 04:44:54.580340 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 11 04:44:57.140698 containerd[1511]: time="2025-09-11T04:44:57.140557372Z" level=info msg="StopContainer for \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" with timeout 30 (s)"
Sep 11 04:44:57.144612 containerd[1511]: time="2025-09-11T04:44:57.144499537Z" level=info msg="Stop container \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" with signal terminated"
Sep 11 04:44:57.159890 systemd[1]: cri-containerd-3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843.scope: Deactivated successfully.
Sep 11 04:44:57.161581 containerd[1511]: time="2025-09-11T04:44:57.161487762Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" id:\"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" pid:3193 exited_at:{seconds:1757565897 nanos:160786281}"
Sep 11 04:44:57.161581 containerd[1511]: time="2025-09-11T04:44:57.161505962Z" level=info msg="received exit event container_id:\"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" id:\"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" pid:3193 exited_at:{seconds:1757565897 nanos:160786281}"
Sep 11 04:44:57.181176 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843-rootfs.mount: Deactivated successfully.
Sep 11 04:44:57.188631 containerd[1511]: time="2025-09-11T04:44:57.188581801Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" id:\"d3fdb4bf8c37532b6c89e6f42f646354aa365d7334047f89aeca7c34e97ef57f\" pid:4285 exited_at:{seconds:1757565897 nanos:188286241}"
Sep 11 04:44:57.190251 containerd[1511]: time="2025-09-11T04:44:57.190185563Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 11 04:44:57.193323 containerd[1511]: time="2025-09-11T04:44:57.193293848Z" level=info msg="StopContainer for \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" with timeout 2 (s)"
Sep 11 04:44:57.193663 containerd[1511]: time="2025-09-11T04:44:57.193635768Z" level=info msg="Stop container \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" with signal terminated"
Sep 11 04:44:57.194891 containerd[1511]: time="2025-09-11T04:44:57.194711610Z" level=info msg="StopContainer for \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" returns successfully"
Sep 11 04:44:57.197432 containerd[1511]: time="2025-09-11T04:44:57.197389774Z" level=info msg="StopPodSandbox for \"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\""
Sep 11 04:44:57.200395 systemd-networkd[1417]: lxc_health: Link DOWN
Sep 11 04:44:57.200402 systemd-networkd[1417]: lxc_health: Lost carrier
Sep 11 04:44:57.202283 containerd[1511]: time="2025-09-11T04:44:57.202194341Z" level=info msg="Container to stop \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 11 04:44:57.211139 systemd[1]: cri-containerd-44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015.scope: Deactivated successfully.
Sep 11 04:44:57.213458 containerd[1511]: time="2025-09-11T04:44:57.213398917Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\" id:\"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\" pid:2881 exit_status:137 exited_at:{seconds:1757565897 nanos:213040076}"
Sep 11 04:44:57.216895 systemd[1]: cri-containerd-6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7.scope: Deactivated successfully.
Sep 11 04:44:57.217251 systemd[1]: cri-containerd-6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7.scope: Consumed 6.116s CPU time, 123M memory peak, 136K read from disk, 14.3M written to disk.
Sep 11 04:44:57.219654 containerd[1511]: time="2025-09-11T04:44:57.219530366Z" level=info msg="received exit event container_id:\"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" id:\"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" pid:3304 exited_at:{seconds:1757565897 nanos:219354525}"
Sep 11 04:44:57.240928 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7-rootfs.mount: Deactivated successfully.
Sep 11 04:44:57.246304 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015-rootfs.mount: Deactivated successfully.
Sep 11 04:44:57.251404 containerd[1511]: time="2025-09-11T04:44:57.251328331Z" level=info msg="shim disconnected" id=44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015 namespace=k8s.io
Sep 11 04:44:57.251628 containerd[1511]: time="2025-09-11T04:44:57.251399731Z" level=warning msg="cleaning up after shim disconnected" id=44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015 namespace=k8s.io
Sep 11 04:44:57.251628 containerd[1511]: time="2025-09-11T04:44:57.251430291Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 11 04:44:57.253702 containerd[1511]: time="2025-09-11T04:44:57.253498254Z" level=info msg="StopContainer for \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" returns successfully"
Sep 11 04:44:57.254794 containerd[1511]: time="2025-09-11T04:44:57.254745856Z" level=info msg="StopPodSandbox for \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\""
Sep 11 04:44:57.254868 containerd[1511]: time="2025-09-11T04:44:57.254835176Z" level=info msg="Container to stop \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 11 04:44:57.254868 containerd[1511]: time="2025-09-11T04:44:57.254858656Z" level=info msg="Container to stop \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 11 04:44:57.254914 containerd[1511]: time="2025-09-11T04:44:57.254868456Z" level=info msg="Container to stop \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 11 04:44:57.254914 containerd[1511]: time="2025-09-11T04:44:57.254877656Z" level=info msg="Container to stop \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 11 04:44:57.254914 containerd[1511]: time="2025-09-11T04:44:57.254885496Z" level=info msg="Container to stop \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 11 04:44:57.260682 systemd[1]: cri-containerd-752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd.scope: Deactivated successfully.
Sep 11 04:44:57.268261 containerd[1511]: time="2025-09-11T04:44:57.268006595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" id:\"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" pid:3304 exited_at:{seconds:1757565897 nanos:219354525}"
Sep 11 04:44:57.268261 containerd[1511]: time="2025-09-11T04:44:57.268068715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" id:\"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" pid:2799 exit_status:137 exited_at:{seconds:1757565897 nanos:261810226}"
Sep 11 04:44:57.271256 containerd[1511]: time="2025-09-11T04:44:57.269914838Z" level=info msg="TearDown network for sandbox \"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\" successfully"
Sep 11 04:44:57.271256 containerd[1511]: time="2025-09-11T04:44:57.269947318Z" level=info msg="StopPodSandbox for \"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\" returns successfully"
Sep 11 04:44:57.270809 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015-shm.mount: Deactivated successfully.
Sep 11 04:44:57.279599 containerd[1511]: time="2025-09-11T04:44:57.278891051Z" level=info msg="received exit event sandbox_id:\"44eefb91aecd07516c0e67496286d092ebee4d76aa5ac45da97071958787c015\" exit_status:137 exited_at:{seconds:1757565897 nanos:213040076}" Sep 11 04:44:57.295398 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd-rootfs.mount: Deactivated successfully. Sep 11 04:44:57.299439 containerd[1511]: time="2025-09-11T04:44:57.298745240Z" level=info msg="received exit event sandbox_id:\"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" exit_status:137 exited_at:{seconds:1757565897 nanos:261810226}" Sep 11 04:44:57.299439 containerd[1511]: time="2025-09-11T04:44:57.299087040Z" level=info msg="shim disconnected" id=752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd namespace=k8s.io Sep 11 04:44:57.299439 containerd[1511]: time="2025-09-11T04:44:57.299146640Z" level=warning msg="cleaning up after shim disconnected" id=752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd namespace=k8s.io Sep 11 04:44:57.299439 containerd[1511]: time="2025-09-11T04:44:57.299173200Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 11 04:44:57.299439 containerd[1511]: time="2025-09-11T04:44:57.299360840Z" level=info msg="TearDown network for sandbox \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" successfully" Sep 11 04:44:57.299439 containerd[1511]: time="2025-09-11T04:44:57.299384480Z" level=info msg="StopPodSandbox for \"752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd\" returns successfully" Sep 11 04:44:57.384660 kubelet[2645]: I0911 04:44:57.384621 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-net\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: 
\"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.384660 kubelet[2645]: I0911 04:44:57.384662 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-hostproc\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385487 kubelet[2645]: I0911 04:44:57.384677 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-kernel\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385487 kubelet[2645]: I0911 04:44:57.384699 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1070ec9e-1959-4684-af5c-385736e842fd-clustermesh-secrets\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385487 kubelet[2645]: I0911 04:44:57.384716 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cni-path\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385487 kubelet[2645]: I0911 04:44:57.384732 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1070ec9e-1959-4684-af5c-385736e842fd-cilium-config-path\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385487 kubelet[2645]: I0911 04:44:57.384752 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m2vnh\" (UniqueName: 
\"kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-kube-api-access-m2vnh\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385487 kubelet[2645]: I0911 04:44:57.384768 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-lib-modules\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385609 kubelet[2645]: I0911 04:44:57.384785 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4j86d\" (UniqueName: \"kubernetes.io/projected/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-kube-api-access-4j86d\") pod \"0889c8ab-4b7c-4c3f-83b7-063fa988d6af\" (UID: \"0889c8ab-4b7c-4c3f-83b7-063fa988d6af\") " Sep 11 04:44:57.385609 kubelet[2645]: I0911 04:44:57.384801 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-bpf-maps\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385609 kubelet[2645]: I0911 04:44:57.384818 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-cilium-config-path\") pod \"0889c8ab-4b7c-4c3f-83b7-063fa988d6af\" (UID: \"0889c8ab-4b7c-4c3f-83b7-063fa988d6af\") " Sep 11 04:44:57.385609 kubelet[2645]: I0911 04:44:57.384833 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-run\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385609 kubelet[2645]: I0911 04:44:57.384855 2645 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-etc-cni-netd\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385609 kubelet[2645]: I0911 04:44:57.384872 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-cgroup\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385723 kubelet[2645]: I0911 04:44:57.384889 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-hubble-tls\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.385723 kubelet[2645]: I0911 04:44:57.384962 2645 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-xtables-lock\") pod \"1070ec9e-1959-4684-af5c-385736e842fd\" (UID: \"1070ec9e-1959-4684-af5c-385736e842fd\") " Sep 11 04:44:57.388333 kubelet[2645]: I0911 04:44:57.388065 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.388333 kubelet[2645]: I0911 04:44:57.388078 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.388333 kubelet[2645]: I0911 04:44:57.388066 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cni-path" (OuterVolumeSpecName: "cni-path") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.388333 kubelet[2645]: I0911 04:44:57.388121 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-hostproc" (OuterVolumeSpecName: "hostproc") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.388333 kubelet[2645]: I0911 04:44:57.388142 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.388624 kubelet[2645]: I0911 04:44:57.388600 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.389012 kubelet[2645]: I0911 04:44:57.388984 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.390170 kubelet[2645]: I0911 04:44:57.389870 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1070ec9e-1959-4684-af5c-385736e842fd-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 04:44:57.392724 kubelet[2645]: I0911 04:44:57.392641 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-kube-api-access-4j86d" (OuterVolumeSpecName: "kube-api-access-4j86d") pod "0889c8ab-4b7c-4c3f-83b7-063fa988d6af" (UID: "0889c8ab-4b7c-4c3f-83b7-063fa988d6af"). InnerVolumeSpecName "kube-api-access-4j86d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 04:44:57.393118 kubelet[2645]: I0911 04:44:57.392862 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.393118 kubelet[2645]: I0911 04:44:57.392890 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.393118 kubelet[2645]: I0911 04:44:57.392907 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 11 04:44:57.393565 kubelet[2645]: I0911 04:44:57.393518 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0889c8ab-4b7c-4c3f-83b7-063fa988d6af" (UID: "0889c8ab-4b7c-4c3f-83b7-063fa988d6af"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 11 04:44:57.394980 kubelet[2645]: I0911 04:44:57.394950 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-kube-api-access-m2vnh" (OuterVolumeSpecName: "kube-api-access-m2vnh") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "kube-api-access-m2vnh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 04:44:57.395517 kubelet[2645]: I0911 04:44:57.395487 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1070ec9e-1959-4684-af5c-385736e842fd-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 11 04:44:57.396028 kubelet[2645]: I0911 04:44:57.396002 2645 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "1070ec9e-1959-4684-af5c-385736e842fd" (UID: "1070ec9e-1959-4684-af5c-385736e842fd"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 11 04:44:57.485828 kubelet[2645]: I0911 04:44:57.485786 2645 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485828 kubelet[2645]: I0911 04:44:57.485832 2645 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485851 2645 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485862 2645 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485871 2645 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485879 2645 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485904 2645 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485918 2645 
reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485926 2645 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/1070ec9e-1959-4684-af5c-385736e842fd-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.485974 kubelet[2645]: I0911 04:44:57.485938 2645 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.486143 kubelet[2645]: I0911 04:44:57.485946 2645 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/1070ec9e-1959-4684-af5c-385736e842fd-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.486143 kubelet[2645]: I0911 04:44:57.485954 2645 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4j86d\" (UniqueName: \"kubernetes.io/projected/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-kube-api-access-4j86d\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.486143 kubelet[2645]: I0911 04:44:57.485962 2645 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m2vnh\" (UniqueName: \"kubernetes.io/projected/1070ec9e-1959-4684-af5c-385736e842fd-kube-api-access-m2vnh\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.486143 kubelet[2645]: I0911 04:44:57.485985 2645 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.486143 kubelet[2645]: I0911 04:44:57.485995 2645 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/1070ec9e-1959-4684-af5c-385736e842fd-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.486143 kubelet[2645]: I0911 04:44:57.486004 2645 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0889c8ab-4b7c-4c3f-83b7-063fa988d6af-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 11 04:44:57.886409 kubelet[2645]: E0911 04:44:57.886337 2645 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 04:44:58.030185 kubelet[2645]: I0911 04:44:58.030084 2645 scope.go:117] "RemoveContainer" containerID="3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843" Sep 11 04:44:58.033554 containerd[1511]: time="2025-09-11T04:44:58.032786349Z" level=info msg="RemoveContainer for \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\"" Sep 11 04:44:58.036109 containerd[1511]: time="2025-09-11T04:44:58.036070875Z" level=info msg="RemoveContainer for \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" returns successfully" Sep 11 04:44:58.038337 kubelet[2645]: I0911 04:44:58.038312 2645 scope.go:117] "RemoveContainer" containerID="3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843" Sep 11 04:44:58.039484 systemd[1]: Removed slice kubepods-besteffort-pod0889c8ab_4b7c_4c3f_83b7_063fa988d6af.slice - libcontainer container kubepods-besteffort-pod0889c8ab_4b7c_4c3f_83b7_063fa988d6af.slice. 
Sep 11 04:44:58.040195 containerd[1511]: time="2025-09-11T04:44:58.040156882Z" level=error msg="ContainerStatus for \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\": not found" Sep 11 04:44:58.040884 kubelet[2645]: E0911 04:44:58.040783 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\": not found" containerID="3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843" Sep 11 04:44:58.044670 systemd[1]: Removed slice kubepods-burstable-pod1070ec9e_1959_4684_af5c_385736e842fd.slice - libcontainer container kubepods-burstable-pod1070ec9e_1959_4684_af5c_385736e842fd.slice. Sep 11 04:44:58.044823 systemd[1]: kubepods-burstable-pod1070ec9e_1959_4684_af5c_385736e842fd.slice: Consumed 6.199s CPU time, 123.3M memory peak, 148K read from disk, 14.3M written to disk. 
Sep 11 04:44:58.048602 kubelet[2645]: I0911 04:44:58.048492 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843"} err="failed to get container status \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b338049b3d725b6a2e0b4e6b35aa43915246272ef030257768c0a9483f08843\": not found" Sep 11 04:44:58.048602 kubelet[2645]: I0911 04:44:58.048600 2645 scope.go:117] "RemoveContainer" containerID="6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7" Sep 11 04:44:58.051082 containerd[1511]: time="2025-09-11T04:44:58.051036102Z" level=info msg="RemoveContainer for \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\"" Sep 11 04:44:58.055369 containerd[1511]: time="2025-09-11T04:44:58.055336110Z" level=info msg="RemoveContainer for \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" returns successfully" Sep 11 04:44:58.056114 kubelet[2645]: I0911 04:44:58.055511 2645 scope.go:117] "RemoveContainer" containerID="bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a" Sep 11 04:44:58.057137 containerd[1511]: time="2025-09-11T04:44:58.057076473Z" level=info msg="RemoveContainer for \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\"" Sep 11 04:44:58.064550 containerd[1511]: time="2025-09-11T04:44:58.064514847Z" level=info msg="RemoveContainer for \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" returns successfully" Sep 11 04:44:58.064713 kubelet[2645]: I0911 04:44:58.064691 2645 scope.go:117] "RemoveContainer" containerID="1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8" Sep 11 04:44:58.067302 containerd[1511]: time="2025-09-11T04:44:58.067276492Z" level=info msg="RemoveContainer for \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\"" Sep 
11 04:44:58.074247 containerd[1511]: time="2025-09-11T04:44:58.073347903Z" level=info msg="RemoveContainer for \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" returns successfully" Sep 11 04:44:58.074419 kubelet[2645]: I0911 04:44:58.074373 2645 scope.go:117] "RemoveContainer" containerID="44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd" Sep 11 04:44:58.075893 containerd[1511]: time="2025-09-11T04:44:58.075864228Z" level=info msg="RemoveContainer for \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\"" Sep 11 04:44:58.078632 containerd[1511]: time="2025-09-11T04:44:58.078608273Z" level=info msg="RemoveContainer for \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" returns successfully" Sep 11 04:44:58.078794 kubelet[2645]: I0911 04:44:58.078750 2645 scope.go:117] "RemoveContainer" containerID="720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8" Sep 11 04:44:58.080110 containerd[1511]: time="2025-09-11T04:44:58.080085556Z" level=info msg="RemoveContainer for \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\"" Sep 11 04:44:58.082645 containerd[1511]: time="2025-09-11T04:44:58.082622480Z" level=info msg="RemoveContainer for \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" returns successfully" Sep 11 04:44:58.083026 kubelet[2645]: I0911 04:44:58.082784 2645 scope.go:117] "RemoveContainer" containerID="6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7" Sep 11 04:44:58.083334 containerd[1511]: time="2025-09-11T04:44:58.083286362Z" level=error msg="ContainerStatus for \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\": not found" Sep 11 04:44:58.083535 kubelet[2645]: E0911 04:44:58.083511 2645 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\": not found" containerID="6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7" Sep 11 04:44:58.083582 kubelet[2645]: I0911 04:44:58.083540 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7"} err="failed to get container status \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e2420a3e81f4705a97d696bce9db5b336a515a5b371fbce093eb145b95f20b7\": not found" Sep 11 04:44:58.083582 kubelet[2645]: I0911 04:44:58.083562 2645 scope.go:117] "RemoveContainer" containerID="bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a" Sep 11 04:44:58.083722 containerd[1511]: time="2025-09-11T04:44:58.083692962Z" level=error msg="ContainerStatus for \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\": not found" Sep 11 04:44:58.083823 kubelet[2645]: E0911 04:44:58.083802 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\": not found" containerID="bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a" Sep 11 04:44:58.083879 kubelet[2645]: I0911 04:44:58.083854 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a"} err="failed to get container status \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\": rpc error: code 
= NotFound desc = an error occurred when try to find container \"bb9ed1859d14460252cf844f2818ab643b7668d9879928eb9ae33e872e25936a\": not found" Sep 11 04:44:58.083879 kubelet[2645]: I0911 04:44:58.083873 2645 scope.go:117] "RemoveContainer" containerID="1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8" Sep 11 04:44:58.084076 containerd[1511]: time="2025-09-11T04:44:58.084043043Z" level=error msg="ContainerStatus for \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\": not found" Sep 11 04:44:58.084158 kubelet[2645]: E0911 04:44:58.084139 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\": not found" containerID="1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8" Sep 11 04:44:58.084186 kubelet[2645]: I0911 04:44:58.084160 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8"} err="failed to get container status \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"1845a00c610493adab61911203c22865b41e76b338f129fe44fe2ec9833f28a8\": not found" Sep 11 04:44:58.084186 kubelet[2645]: I0911 04:44:58.084177 2645 scope.go:117] "RemoveContainer" containerID="44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd" Sep 11 04:44:58.084448 containerd[1511]: time="2025-09-11T04:44:58.084338604Z" level=error msg="ContainerStatus for \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\": not found" Sep 11 04:44:58.084479 kubelet[2645]: E0911 04:44:58.084445 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\": not found" containerID="44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd" Sep 11 04:44:58.084479 kubelet[2645]: I0911 04:44:58.084463 2645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd"} err="failed to get container status \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"44af99f13de2b6db35d39af45b8d8839ca0f78eccf05ca649b5c7430b16781dd\": not found" Sep 11 04:44:58.084479 kubelet[2645]: I0911 04:44:58.084475 2645 scope.go:117] "RemoveContainer" containerID="720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8" Sep 11 04:44:58.084642 containerd[1511]: time="2025-09-11T04:44:58.084599204Z" level=error msg="ContainerStatus for \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\": not found" Sep 11 04:44:58.084728 kubelet[2645]: E0911 04:44:58.084708 2645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\": not found" containerID="720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8" Sep 11 04:44:58.084769 kubelet[2645]: I0911 04:44:58.084735 2645 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8"} err="failed to get container status \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\": rpc error: code = NotFound desc = an error occurred when try to find container \"720b3bd7f68ce29e049be8de97d328eadcdcd304d85049436269e70e2eb674c8\": not found" Sep 11 04:44:58.180657 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-752f0163fdeb5038a2600073fc6742f85858938cb4747f8b8a1bf1dec57d20cd-shm.mount: Deactivated successfully. Sep 11 04:44:58.180770 systemd[1]: var-lib-kubelet-pods-0889c8ab\x2d4b7c\x2d4c3f\x2d83b7\x2d063fa988d6af-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4j86d.mount: Deactivated successfully. Sep 11 04:44:58.180824 systemd[1]: var-lib-kubelet-pods-1070ec9e\x2d1959\x2d4684\x2daf5c\x2d385736e842fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm2vnh.mount: Deactivated successfully. Sep 11 04:44:58.180885 systemd[1]: var-lib-kubelet-pods-1070ec9e\x2d1959\x2d4684\x2daf5c\x2d385736e842fd-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 11 04:44:58.180936 systemd[1]: var-lib-kubelet-pods-1070ec9e\x2d1959\x2d4684\x2daf5c\x2d385736e842fd-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 11 04:44:58.834406 kubelet[2645]: I0911 04:44:58.834358 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0889c8ab-4b7c-4c3f-83b7-063fa988d6af" path="/var/lib/kubelet/pods/0889c8ab-4b7c-4c3f-83b7-063fa988d6af/volumes" Sep 11 04:44:58.834763 kubelet[2645]: I0911 04:44:58.834740 2645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1070ec9e-1959-4684-af5c-385736e842fd" path="/var/lib/kubelet/pods/1070ec9e-1959-4684-af5c-385736e842fd/volumes" Sep 11 04:44:59.086300 sshd[4256]: Connection closed by 10.0.0.1 port 39982 Sep 11 04:44:59.086917 sshd-session[4253]: pam_unix(sshd:session): session closed for user core Sep 11 04:44:59.098397 systemd[1]: sshd@22-10.0.0.77:22-10.0.0.1:39982.service: Deactivated successfully. Sep 11 04:44:59.099964 systemd[1]: session-23.scope: Deactivated successfully. Sep 11 04:44:59.100142 systemd[1]: session-23.scope: Consumed 1.871s CPU time, 26.2M memory peak. Sep 11 04:44:59.100637 systemd-logind[1476]: Session 23 logged out. Waiting for processes to exit. Sep 11 04:44:59.102846 systemd[1]: Started sshd@23-10.0.0.77:22-10.0.0.1:39990.service - OpenSSH per-connection server daemon (10.0.0.1:39990). Sep 11 04:44:59.103499 systemd-logind[1476]: Removed session 23. Sep 11 04:44:59.154707 sshd[4413]: Accepted publickey for core from 10.0.0.1 port 39990 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A Sep 11 04:44:59.155872 sshd-session[4413]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 04:44:59.159971 systemd-logind[1476]: New session 24 of user core. Sep 11 04:44:59.167353 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 11 04:45:01.711158 sshd[4416]: Connection closed by 10.0.0.1 port 39990 Sep 11 04:45:01.712424 sshd-session[4413]: pam_unix(sshd:session): session closed for user core Sep 11 04:45:01.722664 systemd[1]: sshd@23-10.0.0.77:22-10.0.0.1:39990.service: Deactivated successfully. 
Sep 11 04:45:01.725443 systemd[1]: session-24.scope: Deactivated successfully. Sep 11 04:45:01.727273 systemd[1]: session-24.scope: Consumed 2.447s CPU time, 28.3M memory peak. Sep 11 04:45:01.728731 systemd-logind[1476]: Session 24 logged out. Waiting for processes to exit. Sep 11 04:45:01.730901 kubelet[2645]: I0911 04:45:01.730852 2645 memory_manager.go:355] "RemoveStaleState removing state" podUID="0889c8ab-4b7c-4c3f-83b7-063fa988d6af" containerName="cilium-operator" Sep 11 04:45:01.730901 kubelet[2645]: I0911 04:45:01.730884 2645 memory_manager.go:355] "RemoveStaleState removing state" podUID="1070ec9e-1959-4684-af5c-385736e842fd" containerName="cilium-agent" Sep 11 04:45:01.734493 systemd[1]: Started sshd@24-10.0.0.77:22-10.0.0.1:47244.service - OpenSSH per-connection server daemon (10.0.0.1:47244). Sep 11 04:45:01.735739 systemd-logind[1476]: Removed session 24. Sep 11 04:45:01.747466 systemd[1]: Created slice kubepods-burstable-pod70d29c96_08e9_4c4b_86c5_4b7d6db6ed8a.slice - libcontainer container kubepods-burstable-pod70d29c96_08e9_4c4b_86c5_4b7d6db6ed8a.slice. Sep 11 04:45:01.801000 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 47244 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A Sep 11 04:45:01.801770 sshd-session[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 04:45:01.805811 systemd-logind[1476]: New session 25 of user core. 
Sep 11 04:45:01.807499 kubelet[2645]: I0911 04:45:01.807424 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-cilium-cgroup\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.807499 kubelet[2645]: I0911 04:45:01.807463 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-cilium-config-path\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.807499 kubelet[2645]: I0911 04:45:01.807485 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-hostproc\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.807667 kubelet[2645]: I0911 04:45:01.807652 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-host-proc-sys-kernel\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.807759 kubelet[2645]: I0911 04:45:01.807745 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-clustermesh-secrets\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.807873 kubelet[2645]: I0911 04:45:01.807860 2645 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pp8x\" (UniqueName: \"kubernetes.io/projected/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-kube-api-access-7pp8x\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808029 kubelet[2645]: I0911 04:45:01.807975 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-cilium-run\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808029 kubelet[2645]: I0911 04:45:01.807999 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-cni-path\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808029 kubelet[2645]: I0911 04:45:01.808013 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-hubble-tls\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808194 kubelet[2645]: I0911 04:45:01.808150 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-lib-modules\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808194 kubelet[2645]: I0911 04:45:01.808175 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-xtables-lock\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808299 kubelet[2645]: I0911 04:45:01.808285 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-cilium-ipsec-secrets\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808410 kubelet[2645]: I0911 04:45:01.808398 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-bpf-maps\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808548 kubelet[2645]: I0911 04:45:01.808492 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-etc-cni-netd\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.808548 kubelet[2645]: I0911 04:45:01.808512 2645 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a-host-proc-sys-net\") pod \"cilium-d4tcr\" (UID: \"70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a\") " pod="kube-system/cilium-d4tcr" Sep 11 04:45:01.816383 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 11 04:45:01.865203 sshd[4432]: Connection closed by 10.0.0.1 port 47244 Sep 11 04:45:01.865695 sshd-session[4429]: pam_unix(sshd:session): session closed for user core Sep 11 04:45:01.873190 systemd[1]: sshd@24-10.0.0.77:22-10.0.0.1:47244.service: Deactivated successfully. Sep 11 04:45:01.875147 systemd[1]: session-25.scope: Deactivated successfully. Sep 11 04:45:01.875852 systemd-logind[1476]: Session 25 logged out. Waiting for processes to exit. Sep 11 04:45:01.878091 systemd[1]: Started sshd@25-10.0.0.77:22-10.0.0.1:47246.service - OpenSSH per-connection server daemon (10.0.0.1:47246). Sep 11 04:45:01.878732 systemd-logind[1476]: Removed session 25. Sep 11 04:45:01.933442 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 47246 ssh2: RSA SHA256:r93A7kxmah4AojgP6+qDUQjMhUo7EzBa1eMNVubJu6A Sep 11 04:45:01.934741 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 11 04:45:01.940446 systemd-logind[1476]: New session 26 of user core. Sep 11 04:45:01.944353 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 11 04:45:02.052153 kubelet[2645]: E0911 04:45:02.051509 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:02.052271 containerd[1511]: time="2025-09-11T04:45:02.051995799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d4tcr,Uid:70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a,Namespace:kube-system,Attempt:0,}" Sep 11 04:45:02.068173 containerd[1511]: time="2025-09-11T04:45:02.068131373Z" level=info msg="connecting to shim 002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a" address="unix:///run/containerd/s/4d410239c54d1bfdb1204a1ede2904810d11f2e0a86adc14f10a901ed1e437df" namespace=k8s.io protocol=ttrpc version=3 Sep 11 04:45:02.091421 systemd[1]: Started cri-containerd-002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a.scope - libcontainer container 002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a. 
Sep 11 04:45:02.112418 containerd[1511]: time="2025-09-11T04:45:02.112378880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d4tcr,Uid:70d29c96-08e9-4c4b-86c5-4b7d6db6ed8a,Namespace:kube-system,Attempt:0,} returns sandbox id \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\"" Sep 11 04:45:02.113061 kubelet[2645]: E0911 04:45:02.113035 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:02.115409 containerd[1511]: time="2025-09-11T04:45:02.115328530Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 11 04:45:02.122792 containerd[1511]: time="2025-09-11T04:45:02.122191513Z" level=info msg="Container 4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:45:02.127403 containerd[1511]: time="2025-09-11T04:45:02.127365010Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874\"" Sep 11 04:45:02.128277 containerd[1511]: time="2025-09-11T04:45:02.128114333Z" level=info msg="StartContainer for \"4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874\"" Sep 11 04:45:02.129063 containerd[1511]: time="2025-09-11T04:45:02.129020256Z" level=info msg="connecting to shim 4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874" address="unix:///run/containerd/s/4d410239c54d1bfdb1204a1ede2904810d11f2e0a86adc14f10a901ed1e437df" protocol=ttrpc version=3 Sep 11 04:45:02.148394 systemd[1]: Started cri-containerd-4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874.scope - libcontainer 
container 4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874. Sep 11 04:45:02.174027 containerd[1511]: time="2025-09-11T04:45:02.173989566Z" level=info msg="StartContainer for \"4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874\" returns successfully" Sep 11 04:45:02.181528 systemd[1]: cri-containerd-4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874.scope: Deactivated successfully. Sep 11 04:45:02.183518 containerd[1511]: time="2025-09-11T04:45:02.183376597Z" level=info msg="received exit event container_id:\"4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874\" id:\"4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874\" pid:4510 exited_at:{seconds:1757565902 nanos:182941475}" Sep 11 04:45:02.183518 containerd[1511]: time="2025-09-11T04:45:02.183486397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874\" id:\"4503c6281635cf915d3e254f2d42555db150257f0bb5a071a38ec78638ca9874\" pid:4510 exited_at:{seconds:1757565902 nanos:182941475}" Sep 11 04:45:02.887816 kubelet[2645]: E0911 04:45:02.887780 2645 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 11 04:45:03.049923 kubelet[2645]: E0911 04:45:03.049674 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:03.051705 containerd[1511]: time="2025-09-11T04:45:03.051664665Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 11 04:45:03.062395 containerd[1511]: time="2025-09-11T04:45:03.062356385Z" level=info msg="Container 
7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:45:03.073034 containerd[1511]: time="2025-09-11T04:45:03.072971264Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7\"" Sep 11 04:45:03.076486 containerd[1511]: time="2025-09-11T04:45:03.076455117Z" level=info msg="StartContainer for \"7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7\"" Sep 11 04:45:03.077782 containerd[1511]: time="2025-09-11T04:45:03.077753681Z" level=info msg="connecting to shim 7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7" address="unix:///run/containerd/s/4d410239c54d1bfdb1204a1ede2904810d11f2e0a86adc14f10a901ed1e437df" protocol=ttrpc version=3 Sep 11 04:45:03.098381 systemd[1]: Started cri-containerd-7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7.scope - libcontainer container 7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7. Sep 11 04:45:03.120598 containerd[1511]: time="2025-09-11T04:45:03.120562479Z" level=info msg="StartContainer for \"7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7\" returns successfully" Sep 11 04:45:03.126404 systemd[1]: cri-containerd-7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7.scope: Deactivated successfully. 
Sep 11 04:45:03.129833 containerd[1511]: time="2025-09-11T04:45:03.129727752Z" level=info msg="received exit event container_id:\"7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7\" id:\"7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7\" pid:4555 exited_at:{seconds:1757565903 nanos:129564952}" Sep 11 04:45:03.129931 containerd[1511]: time="2025-09-11T04:45:03.129825233Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7\" id:\"7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7\" pid:4555 exited_at:{seconds:1757565903 nanos:129564952}" Sep 11 04:45:03.912931 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e153cc786eee31f25731996cefa09fda0e905c965c07c19ce29edb9b05395e7-rootfs.mount: Deactivated successfully. Sep 11 04:45:04.053878 kubelet[2645]: E0911 04:45:04.053829 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:04.056439 containerd[1511]: time="2025-09-11T04:45:04.056390335Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 11 04:45:04.232148 containerd[1511]: time="2025-09-11T04:45:04.231054074Z" level=info msg="Container 50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:45:04.233418 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2860200143.mount: Deactivated successfully. Sep 11 04:45:04.237917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835329438.mount: Deactivated successfully. 
Sep 11 04:45:04.244253 containerd[1511]: time="2025-09-11T04:45:04.244186967Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf\"" Sep 11 04:45:04.245539 containerd[1511]: time="2025-09-11T04:45:04.245382092Z" level=info msg="StartContainer for \"50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf\"" Sep 11 04:45:04.247436 containerd[1511]: time="2025-09-11T04:45:04.247407060Z" level=info msg="connecting to shim 50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf" address="unix:///run/containerd/s/4d410239c54d1bfdb1204a1ede2904810d11f2e0a86adc14f10a901ed1e437df" protocol=ttrpc version=3 Sep 11 04:45:04.274417 systemd[1]: Started cri-containerd-50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf.scope - libcontainer container 50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf. Sep 11 04:45:04.311718 systemd[1]: cri-containerd-50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf.scope: Deactivated successfully. 
Sep 11 04:45:04.312661 containerd[1511]: time="2025-09-11T04:45:04.312630281Z" level=info msg="received exit event container_id:\"50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf\" id:\"50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf\" pid:4600 exited_at:{seconds:1757565904 nanos:312459040}" Sep 11 04:45:04.312920 containerd[1511]: time="2025-09-11T04:45:04.312663001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf\" id:\"50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf\" pid:4600 exited_at:{seconds:1757565904 nanos:312459040}" Sep 11 04:45:04.316565 containerd[1511]: time="2025-09-11T04:45:04.316494297Z" level=info msg="StartContainer for \"50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf\" returns successfully" Sep 11 04:45:04.913030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50313c1e998d712180d011e19d8d9d02b9253517c7974ba4a9cc83985e54c1bf-rootfs.mount: Deactivated successfully. 
Sep 11 04:45:04.946649 kubelet[2645]: I0911 04:45:04.946318 2645 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-11T04:45:04Z","lastTransitionTime":"2025-09-11T04:45:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 11 04:45:05.058533 kubelet[2645]: E0911 04:45:05.058507 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:05.063357 containerd[1511]: time="2025-09-11T04:45:05.063144188Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 11 04:45:05.075254 containerd[1511]: time="2025-09-11T04:45:05.072638629Z" level=info msg="Container b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:45:05.086001 containerd[1511]: time="2025-09-11T04:45:05.085960046Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742\"" Sep 11 04:45:05.086657 containerd[1511]: time="2025-09-11T04:45:05.086635729Z" level=info msg="StartContainer for \"b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742\"" Sep 11 04:45:05.087696 containerd[1511]: time="2025-09-11T04:45:05.087668374Z" level=info msg="connecting to shim b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742" address="unix:///run/containerd/s/4d410239c54d1bfdb1204a1ede2904810d11f2e0a86adc14f10a901ed1e437df" protocol=ttrpc version=3 Sep 11 
04:45:05.117392 systemd[1]: Started cri-containerd-b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742.scope - libcontainer container b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742. Sep 11 04:45:05.140296 systemd[1]: cri-containerd-b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742.scope: Deactivated successfully. Sep 11 04:45:05.141436 containerd[1511]: time="2025-09-11T04:45:05.141401886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742\" id:\"b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742\" pid:4638 exited_at:{seconds:1757565905 nanos:140432482}" Sep 11 04:45:05.142389 containerd[1511]: time="2025-09-11T04:45:05.142362171Z" level=info msg="received exit event container_id:\"b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742\" id:\"b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742\" pid:4638 exited_at:{seconds:1757565905 nanos:140432482}" Sep 11 04:45:05.143499 containerd[1511]: time="2025-09-11T04:45:05.143475615Z" level=info msg="StartContainer for \"b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742\" returns successfully" Sep 11 04:45:05.160024 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b81c46d0b7615e4e7c146f5670c1c1596af547c39cd38bab28e2611de389a742-rootfs.mount: Deactivated successfully. 
Sep 11 04:45:05.832194 kubelet[2645]: E0911 04:45:05.832155 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:06.063422 kubelet[2645]: E0911 04:45:06.063376 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:06.065664 containerd[1511]: time="2025-09-11T04:45:06.065246185Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 11 04:45:06.074625 containerd[1511]: time="2025-09-11T04:45:06.074589828Z" level=info msg="Container eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501: CDI devices from CRI Config.CDIDevices: []" Sep 11 04:45:06.078806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808373508.mount: Deactivated successfully. 
Sep 11 04:45:06.080766 containerd[1511]: time="2025-09-11T04:45:06.080730936Z" level=info msg="CreateContainer within sandbox \"002fe894fe65cf7bc2c42be76e500760235397118c80c9f274b3a4e0fb6fd00a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\"" Sep 11 04:45:06.081541 containerd[1511]: time="2025-09-11T04:45:06.081488820Z" level=info msg="StartContainer for \"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\"" Sep 11 04:45:06.083038 containerd[1511]: time="2025-09-11T04:45:06.082781306Z" level=info msg="connecting to shim eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501" address="unix:///run/containerd/s/4d410239c54d1bfdb1204a1ede2904810d11f2e0a86adc14f10a901ed1e437df" protocol=ttrpc version=3 Sep 11 04:45:06.101383 systemd[1]: Started cri-containerd-eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501.scope - libcontainer container eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501. 
Sep 11 04:45:06.127268 containerd[1511]: time="2025-09-11T04:45:06.127193992Z" level=info msg="StartContainer for \"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\" returns successfully" Sep 11 04:45:06.181992 containerd[1511]: time="2025-09-11T04:45:06.181828686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\" id:\"fecc35720ac5b37a50533fdee8a5cad8c6104c634fc3266938e4b511b8ccc0c6\" pid:4703 exited_at:{seconds:1757565906 nanos:181558284}" Sep 11 04:45:06.387260 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 11 04:45:07.069279 kubelet[2645]: E0911 04:45:07.069246 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:07.086608 kubelet[2645]: I0911 04:45:07.086383 2645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d4tcr" podStartSLOduration=6.086368388 podStartE2EDuration="6.086368388s" podCreationTimestamp="2025-09-11 04:45:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-11 04:45:07.085686145 +0000 UTC m=+84.331579311" watchObservedRunningTime="2025-09-11 04:45:07.086368388 +0000 UTC m=+84.332261554" Sep 11 04:45:08.071437 kubelet[2645]: E0911 04:45:08.071399 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:08.427045 containerd[1511]: time="2025-09-11T04:45:08.426915219Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\" id:\"ef0fd088182cca24506e47bc3f10b8c16dde3c1bfc6157cbc07b77cada776588\" pid:4988 exit_status:1 
exited_at:{seconds:1757565908 nanos:426425576}" Sep 11 04:45:08.832133 kubelet[2645]: E0911 04:45:08.832044 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:09.226139 systemd-networkd[1417]: lxc_health: Link UP Sep 11 04:45:09.226877 systemd-networkd[1417]: lxc_health: Gained carrier Sep 11 04:45:10.052825 kubelet[2645]: E0911 04:45:10.052782 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:10.076356 kubelet[2645]: E0911 04:45:10.076325 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:10.237364 systemd-networkd[1417]: lxc_health: Gained IPv6LL Sep 11 04:45:10.545958 containerd[1511]: time="2025-09-11T04:45:10.545916901Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\" id:\"ac68cd38982b0895edd98c179100e120e0699197aae5e412e2bfacfc51e2142e\" pid:5245 exited_at:{seconds:1757565910 nanos:545124097}" Sep 11 04:45:10.833771 kubelet[2645]: E0911 04:45:10.833728 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:11.077733 kubelet[2645]: E0911 04:45:11.077699 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:11.832571 kubelet[2645]: E0911 04:45:11.832512 2645 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 11 04:45:12.653493 containerd[1511]: time="2025-09-11T04:45:12.653452601Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\" id:\"a7f8df27d01dc98d4206bf20de60c874ec94304bc9b4f07273f858131d1c7e44\" pid:5273 exited_at:{seconds:1757565912 nanos:653002678}" Sep 11 04:45:12.656120 kubelet[2645]: E0911 04:45:12.656036 2645 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:56630->127.0.0.1:43461: write tcp 127.0.0.1:56630->127.0.0.1:43461: write: broken pipe Sep 11 04:45:14.753102 containerd[1511]: time="2025-09-11T04:45:14.753058047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eb3c588af442085dbf9d804b7b1ef64e4eec2fe4506066176633a27fec2b1501\" id:\"5c252f89f7facdb8dfdf82cd5dceca91f73a137a7c644d0e392bc8370b7cd70c\" pid:5302 exited_at:{seconds:1757565914 nanos:752598564}" Sep 11 04:45:14.757180 sshd[4447]: Connection closed by 10.0.0.1 port 47246 Sep 11 04:45:14.757658 sshd-session[4439]: pam_unix(sshd:session): session closed for user core Sep 11 04:45:14.761013 systemd[1]: sshd@25-10.0.0.77:22-10.0.0.1:47246.service: Deactivated successfully. Sep 11 04:45:14.762955 systemd[1]: session-26.scope: Deactivated successfully. Sep 11 04:45:14.763798 systemd-logind[1476]: Session 26 logged out. Waiting for processes to exit. Sep 11 04:45:14.764997 systemd-logind[1476]: Removed session 26.