Sep 12 17:31:33.747632 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:31:33.747654 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025
Sep 12 17:31:33.747663 kernel: KASLR enabled
Sep 12 17:31:33.747668 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:31:33.747674 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Sep 12 17:31:33.747679 kernel: random: crng init done
Sep 12 17:31:33.747686 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 17:31:33.747691 kernel: secureboot: Secure boot enabled
Sep 12 17:31:33.747697 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:31:33.747704 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 12 17:31:33.747710 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:31:33.747716 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747721 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747727 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747734 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747742 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747764 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747771 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747778 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747784 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:31:33.747790 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 17:31:33.747796 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 17:31:33.747802 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:31:33.747808 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 12 17:31:33.747814 kernel: Zone ranges:
Sep 12 17:31:33.747823 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:31:33.747829 kernel: DMA32 empty
Sep 12 17:31:33.747835 kernel: Normal empty
Sep 12 17:31:33.747841 kernel: Device empty
Sep 12 17:31:33.747847 kernel: Movable zone start for each node
Sep 12 17:31:33.747853 kernel: Early memory node ranges
Sep 12 17:31:33.747859 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 12 17:31:33.747865 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 12 17:31:33.747871 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 12 17:31:33.747877 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 12 17:31:33.747883 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 12 17:31:33.747889 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 12 17:31:33.747896 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 12 17:31:33.747902 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 12 17:31:33.747908 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 17:31:33.747918 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:31:33.747925 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 17:31:33.747931 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 12 17:31:33.747937 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:31:33.747945 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:31:33.747952 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:31:33.747958 kernel: psci: Trusted OS migration not required
Sep 12 17:31:33.747964 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:31:33.747971 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 17:31:33.747977 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 17:31:33.747984 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 17:31:33.747990 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 17:31:33.747996 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:31:33.748004 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:31:33.748011 kernel: CPU features: detected: Spectre-v4
Sep 12 17:31:33.748017 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:31:33.748023 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:31:33.748030 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:31:33.748036 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:31:33.748043 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:31:33.748049 kernel: alternatives: applying boot alternatives
Sep 12 17:31:33.748056 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:31:33.748063 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:31:33.748069 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:31:33.748077 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:31:33.748083 kernel: Fallback order for Node 0: 0
Sep 12 17:31:33.748090 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 17:31:33.748096 kernel: Policy zone: DMA
Sep 12 17:31:33.748102 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:31:33.748109 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 17:31:33.748115 kernel: software IO TLB: area num 4.
Sep 12 17:31:33.748121 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 17:31:33.748128 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 12 17:31:33.748135 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:31:33.748162 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:31:33.748176 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:31:33.748185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:31:33.748191 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:31:33.748198 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:31:33.748204 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:31:33.748211 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:31:33.748217 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:31:33.748224 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:31:33.748231 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:31:33.748237 kernel: GICv3: 256 SPIs implemented
Sep 12 17:31:33.748243 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:31:33.748250 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:31:33.748258 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:31:33.748265 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 17:31:33.748271 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 17:31:33.748277 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 17:31:33.748284 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:31:33.748290 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:31:33.748297 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 17:31:33.748303 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 17:31:33.748310 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:31:33.748316 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:31:33.748323 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:31:33.748329 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:31:33.748337 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:31:33.748344 kernel: arm-pv: using stolen time PV
Sep 12 17:31:33.748351 kernel: Console: colour dummy device 80x25
Sep 12 17:31:33.748358 kernel: ACPI: Core revision 20240827
Sep 12 17:31:33.748365 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:31:33.748372 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:31:33.748379 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:31:33.748385 kernel: landlock: Up and running.
Sep 12 17:31:33.748392 kernel: SELinux: Initializing.
Sep 12 17:31:33.748400 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:31:33.748407 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:31:33.748414 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:31:33.748421 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:31:33.748428 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:31:33.748434 kernel: Remapping and enabling EFI services.
Sep 12 17:31:33.748441 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:31:33.748447 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:31:33.748454 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 17:31:33.748463 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 17:31:33.748474 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:31:33.748482 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:31:33.748490 kernel: Detected PIPT I-cache on CPU2
Sep 12 17:31:33.748497 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 17:31:33.748505 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 17:31:33.748512 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:31:33.748519 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 17:31:33.748526 kernel: Detected PIPT I-cache on CPU3
Sep 12 17:31:33.748534 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 17:31:33.748542 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 17:31:33.748549 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:31:33.748555 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 17:31:33.748562 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:31:33.748569 kernel: SMP: Total of 4 processors activated.
Sep 12 17:31:33.748576 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:31:33.748589 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:31:33.748597 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:31:33.748606 kernel: CPU features: detected: Common not Private translations
Sep 12 17:31:33.748613 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:31:33.748620 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 17:31:33.748627 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:31:33.748634 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:31:33.748641 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:31:33.748648 kernel: CPU features: detected: RAS Extension Support
Sep 12 17:31:33.748655 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 17:31:33.748663 kernel: alternatives: applying system-wide alternatives
Sep 12 17:31:33.748671 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 17:31:33.748679 kernel: Memory: 2422436K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 127516K reserved, 16384K cma-reserved)
Sep 12 17:31:33.748686 kernel: devtmpfs: initialized
Sep 12 17:31:33.748693 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:31:33.748701 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:31:33.748708 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:31:33.748715 kernel: 0 pages in range for non-PLT usage
Sep 12 17:31:33.748722 kernel: 508576 pages in range for PLT usage
Sep 12 17:31:33.748729 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:31:33.748737 kernel: SMBIOS 3.0.0 present.
Sep 12 17:31:33.748744 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 17:31:33.748756 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:31:33.748763 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:31:33.748778 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:31:33.748785 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:31:33.748792 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:31:33.748799 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:31:33.748806 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 12 17:31:33.748815 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:31:33.748822 kernel: cpuidle: using governor menu
Sep 12 17:31:33.748829 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:31:33.748836 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:31:33.748843 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:31:33.748850 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:31:33.748857 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:31:33.748865 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:31:33.748872 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:31:33.748880 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:31:33.748887 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:31:33.748894 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:31:33.748901 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:31:33.748908 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:31:33.748915 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:31:33.748922 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:31:33.748929 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:31:33.748936 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:31:33.748944 kernel: ACPI: Interpreter enabled
Sep 12 17:31:33.748951 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:31:33.748958 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:31:33.748965 kernel: ACPI: CPU0 has been hot-added
Sep 12 17:31:33.748972 kernel: ACPI: CPU1 has been hot-added
Sep 12 17:31:33.748979 kernel: ACPI: CPU2 has been hot-added
Sep 12 17:31:33.748986 kernel: ACPI: CPU3 has been hot-added
Sep 12 17:31:33.748993 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:31:33.749000 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 17:31:33.749009 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:31:33.749146 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:31:33.749213 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:31:33.749273 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:31:33.749332 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 17:31:33.749389 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 17:31:33.749398 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 17:31:33.749407 kernel: PCI host bridge to bus 0000:00
Sep 12 17:31:33.749478 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 17:31:33.749534 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:31:33.749599 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 17:31:33.749656 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:31:33.749737 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:31:33.749829 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 17:31:33.749897 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 17:31:33.749960 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 17:31:33.750038 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:31:33.750099 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 17:31:33.750161 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 17:31:33.750221 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 17:31:33.750276 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 17:31:33.750331 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:31:33.750384 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 17:31:33.750393 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:31:33.750400 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:31:33.750407 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:31:33.750414 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:31:33.750421 kernel: iommu: Default domain type: Translated
Sep 12 17:31:33.750428 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:31:33.750437 kernel: efivars: Registered efivars operations
Sep 12 17:31:33.750444 kernel: vgaarb: loaded
Sep 12 17:31:33.750451 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:31:33.750457 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:31:33.750464 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:31:33.750471 kernel: pnp: PnP ACPI init
Sep 12 17:31:33.750534 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 17:31:33.750544 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:31:33.750553 kernel: NET: Registered PF_INET protocol family
Sep 12 17:31:33.750560 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:31:33.750567 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:31:33.750574 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:31:33.750588 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:31:33.750596 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:31:33.750603 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:31:33.750612 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:31:33.750621 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:31:33.750631 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:31:33.750638 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:31:33.750645 kernel: kvm [1]: HYP mode not available
Sep 12 17:31:33.750651 kernel: Initialise system trusted keyrings
Sep 12 17:31:33.750658 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:31:33.750665 kernel: Key type asymmetric registered
Sep 12 17:31:33.750672 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:31:33.750679 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 17:31:33.750686 kernel: io scheduler mq-deadline registered
Sep 12 17:31:33.750694 kernel: io scheduler kyber registered
Sep 12 17:31:33.750701 kernel: io scheduler bfq registered
Sep 12 17:31:33.750708 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:31:33.750715 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:31:33.750722 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:31:33.750817 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 17:31:33.750827 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:31:33.750835 kernel: thunder_xcv, ver 1.0
Sep 12 17:31:33.750841 kernel: thunder_bgx, ver 1.0
Sep 12 17:31:33.750851 kernel: nicpf, ver 1.0
Sep 12 17:31:33.750858 kernel: nicvf, ver 1.0
Sep 12 17:31:33.750927 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:31:33.750989 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:31:33 UTC (1757698293)
Sep 12 17:31:33.750998 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:31:33.751005 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 17:31:33.751012 kernel: watchdog: NMI not fully supported
Sep 12 17:31:33.751019 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:31:33.751027 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:31:33.751034 kernel: Segment Routing with IPv6
Sep 12 17:31:33.751041 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:31:33.751048 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:31:33.751055 kernel: Key type dns_resolver registered
Sep 12 17:31:33.751062 kernel: registered taskstats version 1
Sep 12 17:31:33.751068 kernel: Loading compiled-in X.509 certificates
Sep 12 17:31:33.751075 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7'
Sep 12 17:31:33.751082 kernel: Demotion targets for Node 0: null
Sep 12 17:31:33.751091 kernel: Key type .fscrypt registered
Sep 12 17:31:33.751098 kernel: Key type fscrypt-provisioning registered
Sep 12 17:31:33.751105 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:31:33.751112 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:31:33.751119 kernel: ima: No architecture policies found
Sep 12 17:31:33.751126 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:31:33.751133 kernel: clk: Disabling unused clocks
Sep 12 17:31:33.751140 kernel: PM: genpd: Disabling unused power domains
Sep 12 17:31:33.751147 kernel: Warning: unable to open an initial console.
Sep 12 17:31:33.751155 kernel: Freeing unused kernel memory: 38912K
Sep 12 17:31:33.751162 kernel: Run /init as init process
Sep 12 17:31:33.751169 kernel: with arguments:
Sep 12 17:31:33.751176 kernel: /init
Sep 12 17:31:33.751182 kernel: with environment:
Sep 12 17:31:33.751189 kernel: HOME=/
Sep 12 17:31:33.751196 kernel: TERM=linux
Sep 12 17:31:33.751203 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:31:33.751210 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:31:33.751221 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:31:33.751230 systemd[1]: Detected virtualization kvm.
Sep 12 17:31:33.751237 systemd[1]: Detected architecture arm64.
Sep 12 17:31:33.751244 systemd[1]: Running in initrd.
Sep 12 17:31:33.751252 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:31:33.751260 systemd[1]: Hostname set to .
Sep 12 17:31:33.751267 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:31:33.751276 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:31:33.751283 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:31:33.751291 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:31:33.751299 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:31:33.751307 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:31:33.751314 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:31:33.751322 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:31:33.751332 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:31:33.751340 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:31:33.751348 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:31:33.751355 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:31:33.751362 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:31:33.751370 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:31:33.751377 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:31:33.751385 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:31:33.751393 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:31:33.751401 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:31:33.751409 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:31:33.751416 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:31:33.751424 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:31:33.751432 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:31:33.751440 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:31:33.751447 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:31:33.751455 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:31:33.751464 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:31:33.751471 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:31:33.751479 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:31:33.751487 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:31:33.751495 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:31:33.751503 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:31:33.751510 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:31:33.751518 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:31:33.751527 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:31:33.751535 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:31:33.751543 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:31:33.751566 systemd-journald[244]: Collecting audit messages is disabled.
Sep 12 17:31:33.751596 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:31:33.751605 systemd-journald[244]: Journal started
Sep 12 17:31:33.751625 systemd-journald[244]: Runtime Journal (/run/log/journal/b726121078834f548d66554f18a54042) is 6M, max 48.5M, 42.4M free.
Sep 12 17:31:33.745182 systemd-modules-load[246]: Inserted module 'overlay'
Sep 12 17:31:33.753744 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:31:33.754853 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:31:33.759767 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:31:33.760226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:31:33.762857 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 12 17:31:33.763783 kernel: Bridge firewalling registered
Sep 12 17:31:33.763904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:31:33.774366 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:31:33.775512 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:31:33.778900 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:31:33.783052 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:31:33.785782 systemd-tmpfiles[268]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:31:33.789495 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:31:33.790796 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:31:33.793936 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:31:33.796844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:31:33.807602 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:31:33.831532 systemd-resolved[287]: Positive Trust Anchors:
Sep 12 17:31:33.831553 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:31:33.831596 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:31:33.836589 systemd-resolved[287]: Defaulting to hostname 'linux'.
Sep 12 17:31:33.839685 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:31:33.837764 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:31:33.839201 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:31:33.910778 kernel: SCSI subsystem initialized
Sep 12 17:31:33.914768 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:31:33.921770 kernel: iscsi: registered transport (tcp)
Sep 12 17:31:33.934771 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:31:33.934796 kernel: QLogic iSCSI HBA Driver
Sep 12 17:31:33.951320 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:31:33.971459 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:31:33.973487 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:31:34.019460 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:31:34.021651 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:31:34.083781 kernel: raid6: neonx8 gen() 15799 MB/s
Sep 12 17:31:34.100766 kernel: raid6: neonx4 gen() 15808 MB/s
Sep 12 17:31:34.117765 kernel: raid6: neonx2 gen() 13226 MB/s
Sep 12 17:31:34.134770 kernel: raid6: neonx1 gen() 10445 MB/s
Sep 12 17:31:34.151765 kernel: raid6: int64x8 gen() 6895 MB/s
Sep 12 17:31:34.168765 kernel: raid6: int64x4 gen() 7360 MB/s
Sep 12 17:31:34.185765 kernel: raid6: int64x2 gen() 6108 MB/s
Sep 12 17:31:34.202764 kernel: raid6: int64x1 gen() 5056 MB/s
Sep 12 17:31:34.202780 kernel: raid6: using algorithm neonx4 gen() 15808 MB/s
Sep 12 17:31:34.219775 kernel: raid6: .... xor() 12343 MB/s, rmw enabled
Sep 12 17:31:34.219799 kernel: raid6: using neon recovery algorithm
Sep 12 17:31:34.225027 kernel: xor: measuring software checksum speed
Sep 12 17:31:34.225055 kernel: 8regs : 21607 MB/sec
Sep 12 17:31:34.226146 kernel: 32regs : 21681 MB/sec
Sep 12 17:31:34.226161 kernel: arm64_neon : 27268 MB/sec
Sep 12 17:31:34.226170 kernel: xor: using function: arm64_neon (27268 MB/sec)
Sep 12 17:31:34.278791 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 12 17:31:34.285624 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:31:34.288233 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:31:34.318502 systemd-udevd[500]: Using default interface naming scheme 'v255'.
Sep 12 17:31:34.322670 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:31:34.325004 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 12 17:31:34.350177 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation
Sep 12 17:31:34.374294 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 12 17:31:34.376797 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:31:34.437291 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:31:34.440458 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 12 17:31:34.491945 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 12 17:31:34.492156 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 12 17:31:34.508049 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 12 17:31:34.508112 kernel: GPT:9289727 != 19775487
Sep 12 17:31:34.508122 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 12 17:31:34.508131 kernel: GPT:9289727 != 19775487
Sep 12 17:31:34.509051 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 12 17:31:34.509631 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:34.513569 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:31:34.513710 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:31:34.516851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:31:34.519924 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:31:34.541074 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 12 17:31:34.547821 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 12 17:31:34.551295 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:31:34.559866 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 12 17:31:34.573108 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:31:34.579960 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 12 17:31:34.580895 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 12 17:31:34.582798 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 12 17:31:34.585134 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:31:34.586668 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:31:34.589109 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 12 17:31:34.590869 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 12 17:31:34.615688 disk-uuid[593]: Primary Header is updated.
Sep 12 17:31:34.615688 disk-uuid[593]: Secondary Entries is updated.
Sep 12 17:31:34.615688 disk-uuid[593]: Secondary Header is updated.
Sep 12 17:31:34.619781 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:34.622466 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 12 17:31:34.625782 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:35.626794 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 12 17:31:35.626877 disk-uuid[597]: The operation has completed successfully.
Sep 12 17:31:35.653680 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 12 17:31:35.653809 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 12 17:31:35.677265 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 12 17:31:35.694735 sh[612]: Success
Sep 12 17:31:35.706910 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 12 17:31:35.706955 kernel: device-mapper: uevent: version 1.0.3
Sep 12 17:31:35.707892 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 12 17:31:35.716844 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 12 17:31:35.743495 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 12 17:31:35.746611 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 12 17:31:35.759720 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 12 17:31:35.765924 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (624)
Sep 12 17:31:35.765953 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194
Sep 12 17:31:35.765972 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:31:35.770774 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 12 17:31:35.770793 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 12 17:31:35.771263 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 12 17:31:35.772378 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 12 17:31:35.773476 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 12 17:31:35.774271 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 12 17:31:35.777048 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 12 17:31:35.800778 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655)
Sep 12 17:31:35.802765 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:31:35.802807 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:31:35.805062 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:31:35.805096 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:31:35.808778 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:31:35.810270 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:31:35.812026 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:31:35.890549 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:31:35.892976 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:31:35.916929 ignition[699]: Ignition 2.21.0
Sep 12 17:31:35.916941 ignition[699]: Stage: fetch-offline
Sep 12 17:31:35.916968 ignition[699]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:35.916976 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:35.917138 ignition[699]: parsed url from cmdline: ""
Sep 12 17:31:35.917141 ignition[699]: no config URL provided
Sep 12 17:31:35.917145 ignition[699]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:31:35.917151 ignition[699]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:31:35.917171 ignition[699]: op(1): [started] loading QEMU firmware config module
Sep 12 17:31:35.917175 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:31:35.927008 ignition[699]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:31:35.939215 systemd-networkd[802]: lo: Link UP
Sep 12 17:31:35.939228 systemd-networkd[802]: lo: Gained carrier
Sep 12 17:31:35.939899 systemd-networkd[802]: Enumeration completed
Sep 12 17:31:35.940044 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:31:35.940584 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:31:35.940587 systemd-networkd[802]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:31:35.941225 systemd[1]: Reached target network.target - Network.
Sep 12 17:31:35.941232 systemd-networkd[802]: eth0: Link UP
Sep 12 17:31:35.941377 systemd-networkd[802]: eth0: Gained carrier
Sep 12 17:31:35.941386 systemd-networkd[802]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:31:35.974812 systemd-networkd[802]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:31:35.974940 ignition[699]: parsing config with SHA512: 3d70ba69906c8beb851b37597fdaa5854f3dc8e3ccdef3ce53d5b65d4b50a96df68f50e8d54f8918bfc28735e4cb8e11b4a3e64d7416b975c3537a5a156671d9
Sep 12 17:31:35.980553 unknown[699]: fetched base config from "system"
Sep 12 17:31:35.980564 unknown[699]: fetched user config from "qemu"
Sep 12 17:31:35.980973 ignition[699]: fetch-offline: fetch-offline passed
Sep 12 17:31:35.981029 ignition[699]: Ignition finished successfully
Sep 12 17:31:35.984298 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:31:35.985335 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:31:35.986061 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 12 17:31:36.008334 ignition[810]: Ignition 2.21.0
Sep 12 17:31:36.008352 ignition[810]: Stage: kargs
Sep 12 17:31:36.008490 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:36.008498 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:36.010074 ignition[810]: kargs: kargs passed
Sep 12 17:31:36.010144 ignition[810]: Ignition finished successfully
Sep 12 17:31:36.012977 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:31:36.014859 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:31:36.042236 ignition[817]: Ignition 2.21.0
Sep 12 17:31:36.042255 ignition[817]: Stage: disks
Sep 12 17:31:36.042382 ignition[817]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:36.042391 ignition[817]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:36.044037 ignition[817]: disks: disks passed
Sep 12 17:31:36.044087 ignition[817]: Ignition finished successfully
Sep 12 17:31:36.046142 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:31:36.047292 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:31:36.048409 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:31:36.049960 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:31:36.051489 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:31:36.052955 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:31:36.055079 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:31:36.079874 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 17:31:36.083949 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:31:36.085836 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:31:36.140772 kernel: EXT4-fs (vda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none.
Sep 12 17:31:36.141496 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 12 17:31:36.142628 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:31:36.144656 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:31:36.146139 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 12 17:31:36.146944 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 12 17:31:36.146983 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 12 17:31:36.147005 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 12 17:31:36.159184 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 12 17:31:36.161406 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 12 17:31:36.165832 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 12 17:31:36.165853 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:31:36.165863 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:31:36.167053 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:31:36.167087 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:31:36.168592 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:31:36.194645 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 12 17:31:36.198475 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 12 17:31:36.202397 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 12 17:31:36.206003 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 12 17:31:36.269393 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 12 17:31:36.271174 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 12 17:31:36.273221 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 12 17:31:36.302818 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:31:36.322900 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 12 17:31:36.334608 ignition[951]: INFO : Ignition 2.21.0
Sep 12 17:31:36.334608 ignition[951]: INFO : Stage: mount
Sep 12 17:31:36.336277 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:36.336277 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:36.339368 ignition[951]: INFO : mount: mount passed
Sep 12 17:31:36.339368 ignition[951]: INFO : Ignition finished successfully
Sep 12 17:31:36.340079 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 12 17:31:36.341808 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 12 17:31:36.764895 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 12 17:31:36.766379 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 12 17:31:36.783902 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Sep 12 17:31:36.783942 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:31:36.783953 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:31:36.789333 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:31:36.789383 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:31:36.790689 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:31:36.817015 ignition[980]: INFO : Ignition 2.21.0
Sep 12 17:31:36.817015 ignition[980]: INFO : Stage: files
Sep 12 17:31:36.819812 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:31:36.819812 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:31:36.819812 ignition[980]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:31:36.824145 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:31:36.824145 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:31:36.826969 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:31:36.826969 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:31:36.826969 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:31:36.826969 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 17:31:36.824861 unknown[980]: wrote ssh authorized keys file for user: core
Sep 12 17:31:36.837362 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Sep 12 17:31:36.884116 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:31:37.171430 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Sep 12 17:31:37.171430 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:31:37.174933 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 12 17:31:37.384847 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:31:37.458642 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:31:37.460132 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:31:37.472154 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:31:37.472154 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:31:37.472154 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:31:37.469155 systemd-networkd[802]: eth0: Gained IPv6LL
Sep 12 17:31:37.477676 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:31:37.477676 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:31:37.477676 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Sep 12 17:31:37.811885 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:31:38.044797 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Sep 12 17:31:38.044797 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:31:38.049335 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:31:38.052771 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:31:38.052771 ignition[980]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:31:38.052771 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 17:31:38.052771 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:31:38.052771 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:31:38.052771 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 17:31:38.052771 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:31:38.067861 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:31:38.071007 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:31:38.073426 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:31:38.073426 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:31:38.073426 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:31:38.073426 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:31:38.073426 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:31:38.073426 ignition[980]: INFO : files: files passed
Sep 12 17:31:38.073426 ignition[980]: INFO : Ignition finished successfully
Sep 12 17:31:38.074204 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:31:38.076365 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:31:38.079892 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:31:38.097098 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:31:38.097204 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:31:38.098772 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:31:38.101227 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:31:38.101227 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:31:38.103951 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:31:38.103813 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:31:38.106103 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:31:38.108239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:31:38.162573 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:31:38.163519 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:31:38.165012 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 12 17:31:38.166527 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 12 17:31:38.167973 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 12 17:31:38.178303 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 12 17:31:38.186217 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:31:38.188825 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:31:38.210974 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:31:38.212862 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:31:38.213792 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:31:38.216284 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:31:38.216423 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:31:38.219057 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:31:38.220696 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:31:38.221507 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:31:38.223747 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:31:38.224650 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:31:38.232273 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:31:38.233853 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:31:38.235244 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:31:38.236819 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:31:38.238711 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:31:38.240331 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:31:38.241486 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:31:38.241623 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:31:38.244393 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:31:38.246050 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:31:38.246944 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:31:38.247850 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:31:38.249331 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:31:38.249453 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:31:38.251877 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:31:38.251992 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:31:38.253662 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:31:38.254882 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:31:38.259844 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:31:38.260958 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:31:38.262692 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 12 17:31:38.264067 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:31:38.264160 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:31:38.265517 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:31:38.265613 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:31:38.266972 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:31:38.267398 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:31:38.269660 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:31:38.269785 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:31:38.272032 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 12 17:31:38.275666 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:31:38.275828 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:31:38.294500 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:31:38.295271 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:31:38.295396 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:31:38.296817 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:31:38.296923 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:31:38.304009 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:31:38.304121 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:31:38.313641 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:31:38.317052 ignition[1035]: INFO : Ignition 2.21.0 Sep 12 17:31:38.317052 ignition[1035]: INFO : Stage: umount Sep 12 17:31:38.318313 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:31:38.318313 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:31:38.320950 ignition[1035]: INFO : umount: umount passed Sep 12 17:31:38.320950 ignition[1035]: INFO : Ignition finished successfully Sep 12 17:31:38.321866 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:31:38.323126 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:31:38.324180 systemd[1]: Stopped target network.target - Network. Sep 12 17:31:38.325302 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:31:38.325365 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:31:38.326147 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:31:38.326186 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 12 17:31:38.327673 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 12 17:31:38.327721 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 12 17:31:38.329245 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 12 17:31:38.329286 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 12 17:31:38.330406 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 12 17:31:38.332098 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 12 17:31:38.339743 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 12 17:31:38.339893 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Sep 12 17:31:38.343506 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 12 17:31:38.343818 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 12 17:31:38.343862 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:31:38.346552 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 12 17:31:38.346877 systemd[1]: systemd-networkd.service: Deactivated successfully. Sep 12 17:31:38.346964 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 12 17:31:38.349744 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 12 17:31:38.350166 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 12 17:31:38.351059 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 12 17:31:38.351092 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:31:38.353229 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 12 17:31:38.354693 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 12 17:31:38.354745 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 12 17:31:38.356351 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:31:38.356389 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:31:38.360182 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 12 17:31:38.360223 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 12 17:31:38.361189 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:31:38.365498 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 12 17:31:38.374118 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 12 17:31:38.374247 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 12 17:31:38.375486 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 12 17:31:38.375530 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 12 17:31:38.377455 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 12 17:31:38.378958 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:31:38.381500 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 12 17:31:38.381574 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 12 17:31:38.382633 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 12 17:31:38.382662 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:31:38.383576 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 12 17:31:38.383624 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:31:38.388341 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 12 17:31:38.388390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 12 17:31:38.389282 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 12 17:31:38.389335 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 12 17:31:38.392370 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... 
Sep 12 17:31:38.393306 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 12 17:31:38.393362 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:31:38.397218 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 12 17:31:38.397263 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 12 17:31:38.399847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:31:38.399958 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:31:38.402994 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 12 17:31:38.408456 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 12 17:31:38.413199 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 12 17:31:38.413301 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 12 17:31:38.414794 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 12 17:31:38.417066 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 12 17:31:38.431255 systemd[1]: Switching root. Sep 12 17:31:38.465660 systemd-journald[244]: Journal stopped Sep 12 17:31:39.294880 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Sep 12 17:31:39.294941 kernel: SELinux: policy capability network_peer_controls=1 Sep 12 17:31:39.294953 kernel: SELinux: policy capability open_perms=1 Sep 12 17:31:39.294968 kernel: SELinux: policy capability extended_socket_class=1 Sep 12 17:31:39.294982 kernel: SELinux: policy capability always_check_network=0 Sep 12 17:31:39.294993 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 12 17:31:39.295002 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 12 17:31:39.295011 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 12 17:31:39.295020 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 12 17:31:39.295029 kernel: SELinux: policy capability userspace_initial_context=0 Sep 12 17:31:39.295038 kernel: audit: type=1403 audit(1757698298.666:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 12 17:31:39.295051 systemd[1]: Successfully loaded SELinux policy in 54.069ms. Sep 12 17:31:39.295070 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.544ms. Sep 12 17:31:39.295081 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 12 17:31:39.295092 systemd[1]: Detected virtualization kvm. Sep 12 17:31:39.295102 systemd[1]: Detected architecture arm64. Sep 12 17:31:39.295111 systemd[1]: Detected first boot. Sep 12 17:31:39.295121 systemd[1]: Initializing machine ID from VM UUID. Sep 12 17:31:39.295131 kernel: NET: Registered PF_VSOCK protocol family Sep 12 17:31:39.295141 zram_generator::config[1083]: No configuration found. Sep 12 17:31:39.295153 systemd[1]: Populated /etc with preset unit settings. Sep 12 17:31:39.295165 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 12 17:31:39.295175 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 12 17:31:39.295185 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
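The systemd banner above lists the compile-time feature set as +FLAG/-FLAG tokens (the same line that confirms +SELINUX, consistent with the policy load just before it). A small sketch that splits that string into enabled and disabled sets; the string constant is copied verbatim from the log line, everything else is illustrative.

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS "
                "+OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
                "+LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY "
                "-P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
                "-XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    enabled  = {f[1:] for f in FEATURES.split() if f.startswith("+")}
    disabled = {f[1:] for f in FEATURES.split() if f.startswith("-")}

    print(f"{len(enabled)} features enabled, {len(disabled)} disabled")
    print("SELinux built in:", "SELINUX" in enabled)   # matches the policy load logged above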
Sep 12 17:31:39.295195 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 12 17:31:39.295206 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 12 17:31:39.295216 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 12 17:31:39.295225 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 12 17:31:39.295235 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 12 17:31:39.295247 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 12 17:31:39.295258 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 12 17:31:39.295268 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 12 17:31:39.295278 systemd[1]: Created slice user.slice - User and Session Slice. Sep 12 17:31:39.295288 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:31:39.295298 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:31:39.295309 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 12 17:31:39.295319 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 12 17:31:39.295330 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 12 17:31:39.295343 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 12 17:31:39.295353 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 12 17:31:39.295363 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:31:39.295373 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 12 17:31:39.295383 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 12 17:31:39.295393 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 12 17:31:39.295403 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 12 17:31:39.295414 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 12 17:31:39.295424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:31:39.295434 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:31:39.295444 systemd[1]: Reached target slices.target - Slice Units. Sep 12 17:31:39.295455 systemd[1]: Reached target swap.target - Swaps. Sep 12 17:31:39.295465 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 12 17:31:39.295475 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 12 17:31:39.295485 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 12 17:31:39.295495 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 12 17:31:39.295505 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 12 17:31:39.295517 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 12 17:31:39.295528 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 12 17:31:39.295538 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
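Unit names such as system-addon\x2dconfig.slice above use systemd's escaping, in which a literal "-" inside a name component is written as \x2d (the same convention behind run-credentials-systemd\x2dresolved.service.mount earlier in the log). Below is a minimal unescape sketch for reading such names; it is an illustration only, not a reimplementation of systemd-escape.

    import re

    def unescape_unit(name: str) -> str:
        """Decode systemd \\xXX escapes, e.g. system-addon\\x2dconfig.slice -> system-addon-config.slice."""
        return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), name)

    print(unescape_unit(r"system-addon\x2dconfig.slice"))     # system-addon-config.slice
    print(unescape_unit(r"dev-disk-by\x2dlabel-OEM.device"))  # dev-disk-by-label-OEM.device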
Sep 12 17:31:39.295548 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 12 17:31:39.295573 systemd[1]: Mounting media.mount - External Media Directory... Sep 12 17:31:39.295587 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 12 17:31:39.295597 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 12 17:31:39.295608 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 12 17:31:39.295619 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 12 17:31:39.295632 systemd[1]: Reached target machines.target - Containers. Sep 12 17:31:39.295642 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 12 17:31:39.295653 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:39.295663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 12 17:31:39.295673 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 12 17:31:39.295683 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:31:39.295693 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:31:39.295703 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:31:39.295714 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 12 17:31:39.295724 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:31:39.295735 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 12 17:31:39.295744 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 12 17:31:39.295773 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 12 17:31:39.295783 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 12 17:31:39.295793 systemd[1]: Stopped systemd-fsck-usr.service. Sep 12 17:31:39.295804 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:31:39.295816 kernel: fuse: init (API version 7.41) Sep 12 17:31:39.295825 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 12 17:31:39.295835 kernel: ACPI: bus type drm_connector registered Sep 12 17:31:39.295844 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 12 17:31:39.295854 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 12 17:31:39.295864 kernel: loop: module loaded Sep 12 17:31:39.295874 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 12 17:31:39.295885 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 12 17:31:39.295895 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:31:39.295906 systemd[1]: verity-setup.service: Deactivated successfully. Sep 12 17:31:39.295916 systemd[1]: Stopped verity-setup.service. 
Sep 12 17:31:39.295926 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 12 17:31:39.295935 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 12 17:31:39.295945 systemd[1]: Mounted media.mount - External Media Directory. Sep 12 17:31:39.295956 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 12 17:31:39.295966 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 12 17:31:39.295977 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 12 17:31:39.296015 systemd-journald[1151]: Collecting audit messages is disabled. Sep 12 17:31:39.296038 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:31:39.296057 systemd-journald[1151]: Journal started Sep 12 17:31:39.296079 systemd-journald[1151]: Runtime Journal (/run/log/journal/b726121078834f548d66554f18a54042) is 6M, max 48.5M, 42.4M free. Sep 12 17:31:39.061138 systemd[1]: Queued start job for default target multi-user.target. Sep 12 17:31:39.091968 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 12 17:31:39.092430 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 12 17:31:39.297780 systemd[1]: Started systemd-journald.service - Journal Service. Sep 12 17:31:39.299985 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 12 17:31:39.301795 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 12 17:31:39.302908 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:31:39.303074 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:31:39.304231 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:31:39.305791 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:31:39.306925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:31:39.307081 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:31:39.308408 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 12 17:31:39.308617 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 12 17:31:39.309746 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:31:39.309949 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:31:39.311103 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 12 17:31:39.312483 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 12 17:31:39.315784 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 12 17:31:39.317029 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Sep 12 17:31:39.323320 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 12 17:31:39.329463 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 12 17:31:39.331540 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 12 17:31:39.333430 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 12 17:31:39.334476 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 12 17:31:39.334510 systemd[1]: Reached target local-fs.target - Local File Systems. 
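systemd-journald reports the runtime journal at 6M used with a 48.5M cap. journald.conf(5) documents RuntimeMaxUse= as defaulting to 10% of the backing filesystem (capped at 4G), so that cap implies a /run tmpfs of roughly 485 MiB; the sketch below only restates that arithmetic, with the sizes taken from the log line and the 10% default as the single outside assumption.

    # Figures reported by systemd-journald above (runtime journal under /run/log/journal).
    used_mib, cap_mib = 6.0, 48.5

    # journald.conf(5): RuntimeMaxUse= defaults to 10% of the filesystem size, capped at 4G.
    implied_run_fs_mib = cap_mib / 0.10
    print(f"implied /run size  ≈ {implied_run_fs_mib:.0f} MiB")
    print(f"journal usage      ≈ {used_mib / cap_mib:.0%} of its cap")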
Sep 12 17:31:39.336258 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 12 17:31:39.343626 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 12 17:31:39.344654 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:39.345811 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 12 17:31:39.349640 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 12 17:31:39.350914 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 12 17:31:39.351916 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 12 17:31:39.353758 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:31:39.356909 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:31:39.359093 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 12 17:31:39.360200 systemd-journald[1151]: Time spent on flushing to /var/log/journal/b726121078834f548d66554f18a54042 is 21.504ms for 886 entries. Sep 12 17:31:39.360200 systemd-journald[1151]: System Journal (/var/log/journal/b726121078834f548d66554f18a54042) is 8M, max 195.6M, 187.6M free. Sep 12 17:31:39.398506 systemd-journald[1151]: Received client request to flush runtime journal. Sep 12 17:31:39.398577 kernel: loop0: detected capacity change from 0 to 207008 Sep 12 17:31:39.398605 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 12 17:31:39.362598 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 12 17:31:39.365712 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:31:39.367402 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 12 17:31:39.371029 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 12 17:31:39.373141 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 12 17:31:39.380866 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 12 17:31:39.389908 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 12 17:31:39.400928 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 12 17:31:39.404937 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:31:39.421891 kernel: loop1: detected capacity change from 0 to 119320 Sep 12 17:31:39.421802 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 12 17:31:39.424964 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 12 17:31:39.430832 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 12 17:31:39.449806 kernel: loop2: detected capacity change from 0 to 100608 Sep 12 17:31:39.455928 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Sep 12 17:31:39.455947 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Sep 12 17:31:39.461384 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
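The journal flush above took 21.504ms for 886 entries. A two-line conversion into per-entry terms, using only the two numbers from that message.

    flush_ms, entries = 21.504, 886   # from the systemd-journald flush message above
    print(f"{flush_ms / entries * 1000:.1f} µs per entry")            # ≈ 24.3 µs
    print(f"{entries / (flush_ms / 1000):,.0f} entries/s sustained")  # ≈ 41,000 entries/s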
Sep 12 17:31:39.477804 kernel: loop3: detected capacity change from 0 to 207008 Sep 12 17:31:39.483806 kernel: loop4: detected capacity change from 0 to 119320 Sep 12 17:31:39.488776 kernel: loop5: detected capacity change from 0 to 100608 Sep 12 17:31:39.492992 (sd-merge)[1223]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 12 17:31:39.493364 (sd-merge)[1223]: Merged extensions into '/usr'. Sep 12 17:31:39.496607 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Sep 12 17:31:39.496625 systemd[1]: Reloading... Sep 12 17:31:39.559948 zram_generator::config[1252]: No configuration found. Sep 12 17:31:39.645909 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 12 17:31:39.705207 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 12 17:31:39.705331 systemd[1]: Reloading finished in 208 ms. Sep 12 17:31:39.732465 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 12 17:31:39.733887 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 12 17:31:39.748005 systemd[1]: Starting ensure-sysext.service... Sep 12 17:31:39.749798 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 12 17:31:39.758276 systemd[1]: Reload requested from client PID 1283 ('systemctl') (unit ensure-sysext.service)... Sep 12 17:31:39.758295 systemd[1]: Reloading... Sep 12 17:31:39.767169 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 12 17:31:39.767209 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 12 17:31:39.767483 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 12 17:31:39.767699 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 12 17:31:39.768390 systemd-tmpfiles[1284]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 12 17:31:39.768633 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Sep 12 17:31:39.768685 systemd-tmpfiles[1284]: ACLs are not supported, ignoring. Sep 12 17:31:39.771396 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:31:39.771410 systemd-tmpfiles[1284]: Skipping /boot Sep 12 17:31:39.777151 systemd-tmpfiles[1284]: Detected autofs mount point /boot during canonicalization of boot. Sep 12 17:31:39.777168 systemd-tmpfiles[1284]: Skipping /boot Sep 12 17:31:39.802779 zram_generator::config[1311]: No configuration found. Sep 12 17:31:39.939061 systemd[1]: Reloading finished in 180 ms. Sep 12 17:31:39.962439 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 12 17:31:39.968935 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 12 17:31:39.974771 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:31:39.977663 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 12 17:31:39.992544 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 12 17:31:39.995640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
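The systemd-tmpfiles warnings above ("Duplicate line for path ... ignoring") arise when more than one tmpfiles.d fragment declares the same path; per tmpfiles.d(5), each line is whitespace-separated with the path in the second column. The rough sketch below reproduces that duplicate check offline; the directory list and the simplistic comment handling are assumptions, and it is not equivalent to systemd-tmpfiles' own override logic.

    from collections import defaultdict
    from pathlib import Path

    # tmpfiles.d(5) search locations; assuming they exist and are readable.
    TMPFILES_DIRS = ["/usr/lib/tmpfiles.d", "/run/tmpfiles.d", "/etc/tmpfiles.d"]

    def find_duplicate_paths():
        seen = defaultdict(list)                      # path -> [(fragment, line number), ...]
        for d in TMPFILES_DIRS:
            base = Path(d)
            if not base.is_dir():
                continue
            for conf in sorted(base.glob("*.conf")):
                for lineno, line in enumerate(conf.read_text().splitlines(), 1):
                    fields = line.split()
                    if len(fields) >= 2 and not fields[0].startswith("#"):
                        seen[fields[1]].append((conf.name, lineno))
        return {path: hits for path, hits in seen.items() if len(hits) > 1}

    for path, hits in sorted(find_duplicate_paths().items()):
        print(path, "->", hits)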
Sep 12 17:31:39.997910 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:31:40.000104 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 12 17:31:40.007156 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 12 17:31:40.009994 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:40.011361 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:31:40.016307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:31:40.025073 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:31:40.026691 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:40.026838 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:31:40.027870 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 12 17:31:40.030299 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:31:40.030486 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:31:40.032444 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:31:40.032631 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:31:40.037060 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:31:40.037252 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:31:40.042837 systemd-udevd[1352]: Using default interface naming scheme 'v255'. Sep 12 17:31:40.044080 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:40.046056 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:31:40.049976 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:31:40.055051 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:31:40.056219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:40.056332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:31:40.058093 augenrules[1383]: No rules Sep 12 17:31:40.065717 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 12 17:31:40.067462 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 12 17:31:40.069345 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:31:40.071413 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:31:40.071654 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:31:40.073078 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 12 17:31:40.075413 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
Sep 12 17:31:40.077893 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:31:40.078083 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:31:40.079460 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:31:40.079634 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:31:40.083687 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:31:40.083865 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:31:40.085849 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 12 17:31:40.118889 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:31:40.119698 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 12 17:31:40.120919 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 12 17:31:40.123399 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 12 17:31:40.130891 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 12 17:31:40.134915 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 12 17:31:40.136076 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 12 17:31:40.136124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 12 17:31:40.137556 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 12 17:31:40.138411 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 12 17:31:40.140777 systemd[1]: Finished ensure-sysext.service. Sep 12 17:31:40.141921 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 12 17:31:40.142231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 12 17:31:40.151414 augenrules[1428]: /sbin/augenrules: No change Sep 12 17:31:40.153009 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 12 17:31:40.154230 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 12 17:31:40.154467 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 12 17:31:40.170124 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 12 17:31:40.170336 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 12 17:31:40.176870 augenrules[1457]: No rules Sep 12 17:31:40.177345 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 12 17:31:40.180555 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:31:40.184092 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:31:40.189251 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 12 17:31:40.189435 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 12 17:31:40.192870 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Sep 12 17:31:40.196837 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 12 17:31:40.206907 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 12 17:31:40.210688 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 12 17:31:40.211948 systemd-resolved[1351]: Positive Trust Anchors: Sep 12 17:31:40.211966 systemd-resolved[1351]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 12 17:31:40.211998 systemd-resolved[1351]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 12 17:31:40.218341 systemd-resolved[1351]: Defaulting to hostname 'linux'. Sep 12 17:31:40.219557 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 12 17:31:40.220533 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:31:40.235779 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:31:40.240017 systemd-networkd[1434]: lo: Link UP Sep 12 17:31:40.240025 systemd-networkd[1434]: lo: Gained carrier Sep 12 17:31:40.240831 systemd-networkd[1434]: Enumeration completed Sep 12 17:31:40.240934 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 12 17:31:40.241251 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:31:40.241261 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 12 17:31:40.241937 systemd[1]: Reached target network.target - Network. Sep 12 17:31:40.242050 systemd-networkd[1434]: eth0: Link UP Sep 12 17:31:40.242165 systemd-networkd[1434]: eth0: Gained carrier Sep 12 17:31:40.242184 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:31:40.244042 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 12 17:31:40.248976 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:31:40.249959 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:31:40.251058 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:31:40.254005 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:31:40.254928 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:31:40.255895 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:31:40.256800 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
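The systemd-resolved "Positive Trust Anchors" entry above carries the DNSSEC root trust anchor as a DS record: ". IN DS 20326 8 2 <digest>". The sketch below just labels those fields (20326 is the key tag of the root KSK, algorithm 8 is RSA/SHA-256, digest type 2 is SHA-256, per the RFC 4034 DS format); the data is copied from the log line and the rest is plain string splitting.

    DS_LINE = (". IN DS 20326 8 2 "
               "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, klass, rrtype, key_tag, algorithm, digest_type, digest = DS_LINE.split()

    # RFC 4034 DS RDATA: key tag, algorithm (8 = RSASHA256), digest type (2 = SHA-256), digest.
    print(f"zone={owner!r} key_tag={key_tag} algorithm={algorithm} (RSASHA256) "
          f"digest_type={digest_type} (SHA-256) digest_len={len(digest) // 2} bytes")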
Sep 12 17:31:40.256834 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:31:40.257510 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:31:40.258514 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:31:40.259479 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:31:40.260621 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:31:40.262215 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:31:40.264857 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:31:40.265966 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:31:40.266921 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Sep 12 17:31:40.267927 systemd-timesyncd[1445]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 12 17:31:40.268126 systemd-timesyncd[1445]: Initial clock synchronization to Fri 2025-09-12 17:31:40.479735 UTC. Sep 12 17:31:40.269898 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:31:40.270978 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:31:40.271959 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:31:40.274591 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:31:40.275964 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:31:40.277505 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:31:40.278569 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:31:40.279356 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:31:40.280145 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:31:40.280173 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:31:40.281087 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:31:40.282915 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:31:40.284909 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 12 17:31:40.286932 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:31:40.290921 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:31:40.291659 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:31:40.292763 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:31:40.295941 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:31:40.297327 jq[1483]: false Sep 12 17:31:40.297933 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:31:40.299973 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:31:40.302966 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:31:40.304649 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
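systemd-networkd reports eth0 acquiring 10.0.0.138/16 with gateway 10.0.0.1, and systemd-timesyncd then reaches the same 10.0.0.1 on port 123. A quick check with Python's standard ipaddress module that the gateway/NTP server sits inside the acquired subnet; the addresses are copied from the log, everything else is stdlib.

    import ipaddress

    iface   = ipaddress.ip_interface("10.0.0.138/16")   # DHCPv4 lease from the log
    gateway = ipaddress.ip_address("10.0.0.1")          # also the NTP server timesyncd contacted

    net = iface.network
    print(net)                         # 10.0.0.0/16
    print(gateway in net)              # True: gateway is on-link
    print(net.num_addresses)           # 65536
    print(iface.ip.reverse_pointer)    # 138.0.0.10.in-addr.arpa (cf. resolved's negative trust anchors)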
Sep 12 17:31:40.305068 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:31:40.306001 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:31:40.336058 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:31:40.339783 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:31:40.344145 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:31:40.345421 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:31:40.345602 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:31:40.345845 systemd[1]: motdgen.service: Deactivated successfully. Sep 12 17:31:40.346008 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:31:40.347704 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:31:40.347895 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:31:40.356848 jq[1510]: true Sep 12 17:31:40.368775 jq[1526]: true Sep 12 17:31:40.371349 extend-filesystems[1484]: Found /dev/vda6 Sep 12 17:31:40.373238 update_engine[1498]: I20250912 17:31:40.369818 1498 main.cc:92] Flatcar Update Engine starting Sep 12 17:31:40.373901 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:31:40.374008 (ntainerd)[1527]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:31:40.377091 extend-filesystems[1484]: Found /dev/vda9 Sep 12 17:31:40.380647 extend-filesystems[1484]: Checking size of /dev/vda9 Sep 12 17:31:40.384076 tar[1517]: linux-arm64/LICENSE Sep 12 17:31:40.384302 tar[1517]: linux-arm64/helm Sep 12 17:31:40.392805 dbus-daemon[1481]: [system] SELinux support is enabled Sep 12 17:31:40.392976 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:31:40.395613 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 12 17:31:40.395646 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:31:40.397123 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:31:40.397148 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:31:40.399354 extend-filesystems[1484]: Resized partition /dev/vda9 Sep 12 17:31:40.404785 extend-filesystems[1545]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:31:40.410890 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:31:40.411036 update_engine[1498]: I20250912 17:31:40.410993 1498 update_check_scheduler.cc:74] Next update check in 11m29s Sep 12 17:31:40.411764 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:31:40.421818 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
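The extend-filesystems/resize2fs entries show /dev/vda9 growing from 553472 to 1864699 blocks, noted as 4k blocks. The conversion below turns those block counts into sizes; the numbers come from the log and the 4 KiB block size is the "(4k)" reported by resize2fs.

    BLOCK = 4096                                   # "(4k) blocks" per the resize2fs output above
    old_blocks, new_blocks = 553_472, 1_864_699    # before/after block counts from the log

    def gib(blocks: int) -> float:
        return blocks * BLOCK / 2**30

    print(f"before: {gib(old_blocks):.2f} GiB")               # ≈ 2.11 GiB
    print(f"after:  {gib(new_blocks):.2f} GiB")               # ≈ 7.11 GiB
    print(f"growth: {gib(new_blocks - old_blocks):.2f} GiB")  # ≈ 5.00 GiB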
Sep 12 17:31:40.435771 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:31:40.451078 extend-filesystems[1545]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:31:40.451078 extend-filesystems[1545]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:31:40.451078 extend-filesystems[1545]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:31:40.461277 extend-filesystems[1484]: Resized filesystem in /dev/vda9 Sep 12 17:31:40.452010 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:31:40.466954 bash[1557]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:31:40.452214 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:31:40.490060 systemd-logind[1496]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:31:40.491970 systemd-logind[1496]: New seat seat0. Sep 12 17:31:40.502142 systemd[1]: Started systemd-logind.service - User Login Management. Sep 12 17:31:40.504150 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:31:40.509271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:31:40.537149 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:31:40.557841 locksmithd[1549]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:31:40.566773 containerd[1527]: time="2025-09-12T17:31:40Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:31:40.573770 containerd[1527]: time="2025-09-12T17:31:40.570266040Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:31:40.585933 containerd[1527]: time="2025-09-12T17:31:40.585888480Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.76µs" Sep 12 17:31:40.585933 containerd[1527]: time="2025-09-12T17:31:40.585929200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:31:40.585997 containerd[1527]: time="2025-09-12T17:31:40.585951040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:31:40.586137 containerd[1527]: time="2025-09-12T17:31:40.586115440Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:31:40.586162 containerd[1527]: time="2025-09-12T17:31:40.586137960Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:31:40.586179 containerd[1527]: time="2025-09-12T17:31:40.586162800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586233 containerd[1527]: time="2025-09-12T17:31:40.586214240Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586257 containerd[1527]: time="2025-09-12T17:31:40.586232840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586475 containerd[1527]: time="2025-09-12T17:31:40.586449440Z" level=info msg="skip loading plugin" error="path 
/var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586475 containerd[1527]: time="2025-09-12T17:31:40.586472360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586517 containerd[1527]: time="2025-09-12T17:31:40.586484600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586517 containerd[1527]: time="2025-09-12T17:31:40.586492960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586593 containerd[1527]: time="2025-09-12T17:31:40.586573080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586831 containerd[1527]: time="2025-09-12T17:31:40.586810400Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586861 containerd[1527]: time="2025-09-12T17:31:40.586848160Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:31:40.586861 containerd[1527]: time="2025-09-12T17:31:40.586858520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:31:40.586919 containerd[1527]: time="2025-09-12T17:31:40.586901960Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:31:40.587282 containerd[1527]: time="2025-09-12T17:31:40.587259480Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:31:40.587354 containerd[1527]: time="2025-09-12T17:31:40.587335840Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:31:40.590822 containerd[1527]: time="2025-09-12T17:31:40.590790720Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:31:40.590875 containerd[1527]: time="2025-09-12T17:31:40.590861000Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:31:40.590912 containerd[1527]: time="2025-09-12T17:31:40.590877680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:31:40.590912 containerd[1527]: time="2025-09-12T17:31:40.590891160Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:31:40.590912 containerd[1527]: time="2025-09-12T17:31:40.590903920Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:31:40.590962 containerd[1527]: time="2025-09-12T17:31:40.590918200Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:31:40.590979 containerd[1527]: time="2025-09-12T17:31:40.590962680Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:31:40.590996 containerd[1527]: time="2025-09-12T17:31:40.590977720Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:31:40.591019 containerd[1527]: time="2025-09-12T17:31:40.590993360Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:31:40.591019 containerd[1527]: time="2025-09-12T17:31:40.591004960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:31:40.591019 containerd[1527]: time="2025-09-12T17:31:40.591014160Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:31:40.591066 containerd[1527]: time="2025-09-12T17:31:40.591025880Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:31:40.591165 containerd[1527]: time="2025-09-12T17:31:40.591140040Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:31:40.591191 containerd[1527]: time="2025-09-12T17:31:40.591169120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:31:40.591191 containerd[1527]: time="2025-09-12T17:31:40.591187680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:31:40.591232 containerd[1527]: time="2025-09-12T17:31:40.591200000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:31:40.591232 containerd[1527]: time="2025-09-12T17:31:40.591210800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:31:40.591232 containerd[1527]: time="2025-09-12T17:31:40.591221520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:31:40.591281 containerd[1527]: time="2025-09-12T17:31:40.591232560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:31:40.591281 containerd[1527]: time="2025-09-12T17:31:40.591243000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 17:31:40.591281 containerd[1527]: time="2025-09-12T17:31:40.591253680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:31:40.591281 containerd[1527]: time="2025-09-12T17:31:40.591263400Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:31:40.591281 containerd[1527]: time="2025-09-12T17:31:40.591272880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:31:40.591474 containerd[1527]: time="2025-09-12T17:31:40.591455560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:31:40.591503 containerd[1527]: time="2025-09-12T17:31:40.591475400Z" level=info msg="Start snapshots syncer" Sep 12 17:31:40.591526 containerd[1527]: time="2025-09-12T17:31:40.591510600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:31:40.591944 containerd[1527]: time="2025-09-12T17:31:40.591868440Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:31:40.592038 containerd[1527]: time="2025-09-12T17:31:40.591961600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:31:40.592059 containerd[1527]: time="2025-09-12T17:31:40.592042880Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:31:40.592225 containerd[1527]: time="2025-09-12T17:31:40.592198160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:31:40.592263 containerd[1527]: time="2025-09-12T17:31:40.592230240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:31:40.592263 containerd[1527]: time="2025-09-12T17:31:40.592244920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:31:40.592263 containerd[1527]: time="2025-09-12T17:31:40.592257360Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:31:40.592313 containerd[1527]: time="2025-09-12T17:31:40.592270960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:31:40.592313 containerd[1527]: time="2025-09-12T17:31:40.592283080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:31:40.592313 containerd[1527]: time="2025-09-12T17:31:40.592294400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:31:40.592363 containerd[1527]: time="2025-09-12T17:31:40.592317560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:31:40.592363 containerd[1527]: 
time="2025-09-12T17:31:40.592329520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:31:40.592363 containerd[1527]: time="2025-09-12T17:31:40.592339840Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:31:40.592413 containerd[1527]: time="2025-09-12T17:31:40.592373000Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:31:40.592413 containerd[1527]: time="2025-09-12T17:31:40.592386640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:31:40.592413 containerd[1527]: time="2025-09-12T17:31:40.592394640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:31:40.592413 containerd[1527]: time="2025-09-12T17:31:40.592403560Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:31:40.592413 containerd[1527]: time="2025-09-12T17:31:40.592410840Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:31:40.592493 containerd[1527]: time="2025-09-12T17:31:40.592420200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:31:40.592493 containerd[1527]: time="2025-09-12T17:31:40.592430320Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:31:40.592526 containerd[1527]: time="2025-09-12T17:31:40.592505640Z" level=info msg="runtime interface created" Sep 12 17:31:40.592526 containerd[1527]: time="2025-09-12T17:31:40.592511080Z" level=info msg="created NRI interface" Sep 12 17:31:40.592526 containerd[1527]: time="2025-09-12T17:31:40.592518760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:31:40.592581 containerd[1527]: time="2025-09-12T17:31:40.592528960Z" level=info msg="Connect containerd service" Sep 12 17:31:40.592604 containerd[1527]: time="2025-09-12T17:31:40.592580000Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:31:40.595632 containerd[1527]: time="2025-09-12T17:31:40.595595160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:31:40.676396 containerd[1527]: time="2025-09-12T17:31:40.676335960Z" level=info msg="Start subscribing containerd event" Sep 12 17:31:40.676493 containerd[1527]: time="2025-09-12T17:31:40.676413080Z" level=info msg="Start recovering state" Sep 12 17:31:40.676513 containerd[1527]: time="2025-09-12T17:31:40.676507320Z" level=info msg="Start event monitor" Sep 12 17:31:40.676531 containerd[1527]: time="2025-09-12T17:31:40.676520280Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:31:40.676531 containerd[1527]: time="2025-09-12T17:31:40.676527840Z" level=info msg="Start streaming server" Sep 12 17:31:40.676616 containerd[1527]: time="2025-09-12T17:31:40.676536360Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:31:40.676616 containerd[1527]: 
time="2025-09-12T17:31:40.676554480Z" level=info msg="runtime interface starting up..." Sep 12 17:31:40.676616 containerd[1527]: time="2025-09-12T17:31:40.676562960Z" level=info msg="starting plugins..." Sep 12 17:31:40.676616 containerd[1527]: time="2025-09-12T17:31:40.676578320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:31:40.677182 containerd[1527]: time="2025-09-12T17:31:40.677155160Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:31:40.677224 containerd[1527]: time="2025-09-12T17:31:40.677209600Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:31:40.677279 containerd[1527]: time="2025-09-12T17:31:40.677264800Z" level=info msg="containerd successfully booted in 0.110857s" Sep 12 17:31:40.677388 systemd[1]: Started containerd.service - containerd container runtime. Sep 12 17:31:40.757702 tar[1517]: linux-arm64/README.md Sep 12 17:31:40.777103 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:31:41.318613 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:31:41.338361 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:31:41.340904 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:31:41.358137 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:31:41.358876 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:31:41.361527 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:31:41.382234 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:31:41.385877 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:31:41.387757 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:31:41.388802 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:31:42.140836 systemd-networkd[1434]: eth0: Gained IPv6LL Sep 12 17:31:42.143175 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:31:42.144628 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:31:42.148101 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:31:42.150444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:42.167512 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:31:42.184526 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:31:42.184845 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 12 17:31:42.186207 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:31:42.189882 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:31:42.735228 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:42.736622 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:31:42.738151 systemd[1]: Startup finished in 2.002s (kernel) + 5.059s (initrd) + 4.126s (userspace) = 11.189s. 
Sep 12 17:31:42.749222 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:31:43.148457 kubelet[1634]: E0912 17:31:43.147994 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:31:43.151297 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:31:43.151435 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:31:43.151785 systemd[1]: kubelet.service: Consumed 754ms CPU time, 258M memory peak. Sep 12 17:31:46.641187 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:31:46.642235 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:50220.service - OpenSSH per-connection server daemon (10.0.0.1:50220). Sep 12 17:31:46.740551 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 50220 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:46.742722 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:46.748709 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 12 17:31:46.749563 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 12 17:31:46.755091 systemd-logind[1496]: New session 1 of user core. Sep 12 17:31:46.775809 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 12 17:31:46.778914 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 12 17:31:46.802795 (systemd)[1653]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 12 17:31:46.804973 systemd-logind[1496]: New session c1 of user core. Sep 12 17:31:46.917242 systemd[1653]: Queued start job for default target default.target. Sep 12 17:31:46.939797 systemd[1653]: Created slice app.slice - User Application Slice. Sep 12 17:31:46.939825 systemd[1653]: Reached target paths.target - Paths. Sep 12 17:31:46.939862 systemd[1653]: Reached target timers.target - Timers. Sep 12 17:31:46.941087 systemd[1653]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 12 17:31:46.950128 systemd[1653]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 12 17:31:46.950187 systemd[1653]: Reached target sockets.target - Sockets. Sep 12 17:31:46.950220 systemd[1653]: Reached target basic.target - Basic System. Sep 12 17:31:46.950248 systemd[1653]: Reached target default.target - Main User Target. Sep 12 17:31:46.950274 systemd[1653]: Startup finished in 138ms. Sep 12 17:31:46.950418 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 12 17:31:46.951687 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 12 17:31:47.019814 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:50234.service - OpenSSH per-connection server daemon (10.0.0.1:50234). Sep 12 17:31:47.078927 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 50234 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:47.080175 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:47.084848 systemd-logind[1496]: New session 2 of user core. 
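
The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the usual pre-bootstrap state on a kubeadm-style node: kubelet.service starts before kubeadm init or join has written the kubelet configuration and credentials, fails, and is restarted by systemd until those files exist. A small probe like the sketch below makes the missing pieces explicit; the paths follow the standard kubeadm layout and are assumptions about this host rather than values read from it.

```go
// Minimal sketch: report which files a kubeadm-provisioned kubelet expects are
// still missing. Paths assume the standard kubeadm layout.
package main

import (
	"fmt"
	"os"
)

func main() {
	checks := []struct{ path, meaning string }{
		{"/var/lib/kubelet/config.yaml", "KubeletConfiguration written by kubeadm"},
		{"/etc/kubernetes/kubelet.conf", "kubeconfig the kubelet uses to reach the API server"},
		{"/etc/kubernetes/bootstrap-kubelet.conf", "bootstrap kubeconfig for TLS bootstrapping"},
		{"/etc/kubernetes/manifests", "static pod manifests (apiserver, scheduler, ...)"},
	}
	for _, c := range checks {
		if _, err := os.Stat(c.path); err != nil {
			fmt.Printf("missing: %-40s (%s)\n", c.path, c.meaning)
		} else {
			fmt.Printf("present: %s\n", c.path)
		}
	}
}
```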
Sep 12 17:31:47.099971 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 12 17:31:47.152324 sshd[1667]: Connection closed by 10.0.0.1 port 50234 Sep 12 17:31:47.152817 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:47.170246 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:50234.service: Deactivated successfully. Sep 12 17:31:47.173083 systemd[1]: session-2.scope: Deactivated successfully. Sep 12 17:31:47.174983 systemd-logind[1496]: Session 2 logged out. Waiting for processes to exit. Sep 12 17:31:47.176683 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:50238.service - OpenSSH per-connection server daemon (10.0.0.1:50238). Sep 12 17:31:47.177721 systemd-logind[1496]: Removed session 2. Sep 12 17:31:47.252611 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 50238 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:47.253886 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:47.259097 systemd-logind[1496]: New session 3 of user core. Sep 12 17:31:47.269930 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 12 17:31:47.318457 sshd[1676]: Connection closed by 10.0.0.1 port 50238 Sep 12 17:31:47.318889 sshd-session[1673]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:47.329749 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:50238.service: Deactivated successfully. Sep 12 17:31:47.331375 systemd[1]: session-3.scope: Deactivated successfully. Sep 12 17:31:47.333371 systemd-logind[1496]: Session 3 logged out. Waiting for processes to exit. Sep 12 17:31:47.335196 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:50248.service - OpenSSH per-connection server daemon (10.0.0.1:50248). Sep 12 17:31:47.336442 systemd-logind[1496]: Removed session 3. Sep 12 17:31:47.391279 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 50248 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:47.392744 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:47.397489 systemd-logind[1496]: New session 4 of user core. Sep 12 17:31:47.404961 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 12 17:31:47.460745 sshd[1685]: Connection closed by 10.0.0.1 port 50248 Sep 12 17:31:47.461165 sshd-session[1682]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:47.473669 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:50248.service: Deactivated successfully. Sep 12 17:31:47.476030 systemd[1]: session-4.scope: Deactivated successfully. Sep 12 17:31:47.476834 systemd-logind[1496]: Session 4 logged out. Waiting for processes to exit. Sep 12 17:31:47.482452 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:50250.service - OpenSSH per-connection server daemon (10.0.0.1:50250). Sep 12 17:31:47.483164 systemd-logind[1496]: Removed session 4. Sep 12 17:31:47.535940 sshd[1691]: Accepted publickey for core from 10.0.0.1 port 50250 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:47.540435 sshd-session[1691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:47.545172 systemd-logind[1496]: New session 5 of user core. Sep 12 17:31:47.561971 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 12 17:31:47.625719 sudo[1695]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 12 17:31:47.625993 sudo[1695]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:47.639652 sudo[1695]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:47.642770 sshd[1694]: Connection closed by 10.0.0.1 port 50250 Sep 12 17:31:47.641651 sshd-session[1691]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:47.653020 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:50250.service: Deactivated successfully. Sep 12 17:31:47.657165 systemd[1]: session-5.scope: Deactivated successfully. Sep 12 17:31:47.658844 systemd-logind[1496]: Session 5 logged out. Waiting for processes to exit. Sep 12 17:31:47.662715 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:50266.service - OpenSSH per-connection server daemon (10.0.0.1:50266). Sep 12 17:31:47.663832 systemd-logind[1496]: Removed session 5. Sep 12 17:31:47.731576 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 50266 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:47.733369 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:47.740265 systemd-logind[1496]: New session 6 of user core. Sep 12 17:31:47.758932 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 12 17:31:47.811264 sudo[1706]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 12 17:31:47.811538 sudo[1706]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:47.880884 sudo[1706]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:47.885813 sudo[1705]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 12 17:31:47.886064 sudo[1705]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:47.897351 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 12 17:31:47.946956 augenrules[1728]: No rules Sep 12 17:31:47.948392 systemd[1]: audit-rules.service: Deactivated successfully. Sep 12 17:31:47.948618 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 12 17:31:47.951129 sudo[1705]: pam_unix(sudo:session): session closed for user root Sep 12 17:31:47.952345 sshd[1704]: Connection closed by 10.0.0.1 port 50266 Sep 12 17:31:47.953663 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Sep 12 17:31:47.970862 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:50266.service: Deactivated successfully. Sep 12 17:31:47.972650 systemd[1]: session-6.scope: Deactivated successfully. Sep 12 17:31:47.973987 systemd-logind[1496]: Session 6 logged out. Waiting for processes to exit. Sep 12 17:31:47.977017 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:50268.service - OpenSSH per-connection server daemon (10.0.0.1:50268). Sep 12 17:31:47.978090 systemd-logind[1496]: Removed session 6. Sep 12 17:31:48.034025 sshd[1737]: Accepted publickey for core from 10.0.0.1 port 50268 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:31:48.035153 sshd-session[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:31:48.038939 systemd-logind[1496]: New session 7 of user core. Sep 12 17:31:48.054935 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 12 17:31:48.107575 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 12 17:31:48.107856 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 12 17:31:48.387501 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 12 17:31:48.406071 (dockerd)[1762]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 12 17:31:48.606827 dockerd[1762]: time="2025-09-12T17:31:48.606741512Z" level=info msg="Starting up" Sep 12 17:31:48.607620 dockerd[1762]: time="2025-09-12T17:31:48.607599302Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 12 17:31:48.618283 dockerd[1762]: time="2025-09-12T17:31:48.618246741Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 12 17:31:48.646081 dockerd[1762]: time="2025-09-12T17:31:48.645982965Z" level=info msg="Loading containers: start." Sep 12 17:31:48.653779 kernel: Initializing XFRM netlink socket Sep 12 17:31:48.844858 systemd-networkd[1434]: docker0: Link UP Sep 12 17:31:48.848549 dockerd[1762]: time="2025-09-12T17:31:48.848503576Z" level=info msg="Loading containers: done." Sep 12 17:31:48.863276 dockerd[1762]: time="2025-09-12T17:31:48.863217353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 12 17:31:48.863419 dockerd[1762]: time="2025-09-12T17:31:48.863302832Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 12 17:31:48.863419 dockerd[1762]: time="2025-09-12T17:31:48.863391586Z" level=info msg="Initializing buildkit" Sep 12 17:31:48.886861 dockerd[1762]: time="2025-09-12T17:31:48.886826777Z" level=info msg="Completed buildkit initialization" Sep 12 17:31:48.891493 dockerd[1762]: time="2025-09-12T17:31:48.891456392Z" level=info msg="Daemon has completed initialization" Sep 12 17:31:48.891638 dockerd[1762]: time="2025-09-12T17:31:48.891525503Z" level=info msg="API listen on /run/docker.sock" Sep 12 17:31:48.891704 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 12 17:31:49.438668 containerd[1527]: time="2025-09-12T17:31:49.438625042Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Sep 12 17:31:50.057780 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050311336.mount: Deactivated successfully. 
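
Once dockerd logs "API listen on /run/docker.sock", the Engine API is plain HTTP over that unix socket, so a client only needs a custom dialer. The sketch below, standard library only, fetches /version, the same endpoint the docker CLI queries; the socket path comes from the log line, everything else is illustrative.

```go
// Minimal sketch: query the Docker Engine API over its unix socket with only
// the standard library. GET /version returns the daemon's version report as JSON.
package main

import (
	"context"
	"fmt"
	"io"
	"log"
	"net"
	"net/http"
)

func main() {
	client := &http.Client{
		Transport: &http.Transport{
			// Ignore the URL's host and dial the daemon's unix socket instead.
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", "/run/docker.sock")
			},
		},
	}
	resp, err := client.Get("http://docker/version") // host part is a placeholder
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(body))
}
```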
Sep 12 17:31:50.946947 containerd[1527]: time="2025-09-12T17:31:50.946873967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:50.948140 containerd[1527]: time="2025-09-12T17:31:50.947884389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Sep 12 17:31:50.949176 containerd[1527]: time="2025-09-12T17:31:50.949135029Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:50.954262 containerd[1527]: time="2025-09-12T17:31:50.954214558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:50.955316 containerd[1527]: time="2025-09-12T17:31:50.955279854Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.516607596s" Sep 12 17:31:50.955365 containerd[1527]: time="2025-09-12T17:31:50.955325857Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Sep 12 17:31:50.956095 containerd[1527]: time="2025-09-12T17:31:50.956069734Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Sep 12 17:31:51.997187 containerd[1527]: time="2025-09-12T17:31:51.997132183Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:51.998467 containerd[1527]: time="2025-09-12T17:31:51.998430768Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Sep 12 17:31:51.999602 containerd[1527]: time="2025-09-12T17:31:51.999551322Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:52.003787 containerd[1527]: time="2025-09-12T17:31:52.003153989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:52.003960 containerd[1527]: time="2025-09-12T17:31:52.003932708Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.047824935s" Sep 12 17:31:52.004034 containerd[1527]: time="2025-09-12T17:31:52.004019312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Sep 12 17:31:52.004665 
containerd[1527]: time="2025-09-12T17:31:52.004585785Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Sep 12 17:31:53.007700 containerd[1527]: time="2025-09-12T17:31:53.007649331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:53.008708 containerd[1527]: time="2025-09-12T17:31:53.008046277Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Sep 12 17:31:53.009300 containerd[1527]: time="2025-09-12T17:31:53.009248537Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:53.011986 containerd[1527]: time="2025-09-12T17:31:53.011916373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:53.013055 containerd[1527]: time="2025-09-12T17:31:53.013015284Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.008396305s" Sep 12 17:31:53.013055 containerd[1527]: time="2025-09-12T17:31:53.013054492Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Sep 12 17:31:53.013692 containerd[1527]: time="2025-09-12T17:31:53.013668229Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Sep 12 17:31:53.319509 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 12 17:31:53.321286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:31:53.475493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:31:53.479452 (kubelet)[2055]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:31:53.533734 kubelet[2055]: E0912 17:31:53.533658 2055 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:31:53.537953 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:31:53.538108 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:31:53.538456 systemd[1]: kubelet.service: Consumed 154ms CPU time, 105.8M memory peak. Sep 12 17:31:54.083629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount241241288.mount: Deactivated successfully. 
Sep 12 17:31:54.504853 containerd[1527]: time="2025-09-12T17:31:54.504591707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:54.509164 containerd[1527]: time="2025-09-12T17:31:54.509114480Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Sep 12 17:31:54.510074 containerd[1527]: time="2025-09-12T17:31:54.510016350Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:54.514103 containerd[1527]: time="2025-09-12T17:31:54.513932545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:54.514745 containerd[1527]: time="2025-09-12T17:31:54.514689100Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.500986776s" Sep 12 17:31:54.514745 containerd[1527]: time="2025-09-12T17:31:54.514732301Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Sep 12 17:31:54.515407 containerd[1527]: time="2025-09-12T17:31:54.515377378Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Sep 12 17:31:55.100975 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822371746.mount: Deactivated successfully. 
Sep 12 17:31:55.875141 containerd[1527]: time="2025-09-12T17:31:55.875085202Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:55.876626 containerd[1527]: time="2025-09-12T17:31:55.876577874Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Sep 12 17:31:55.877421 containerd[1527]: time="2025-09-12T17:31:55.877391785Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:55.880693 containerd[1527]: time="2025-09-12T17:31:55.880647949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:55.882351 containerd[1527]: time="2025-09-12T17:31:55.882319348Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.366912238s" Sep 12 17:31:55.882415 containerd[1527]: time="2025-09-12T17:31:55.882355776Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Sep 12 17:31:55.882974 containerd[1527]: time="2025-09-12T17:31:55.882905693Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 12 17:31:56.366115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount765799543.mount: Deactivated successfully. 
Sep 12 17:31:56.379082 containerd[1527]: time="2025-09-12T17:31:56.378961759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:56.381572 containerd[1527]: time="2025-09-12T17:31:56.381510475Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 12 17:31:56.384497 containerd[1527]: time="2025-09-12T17:31:56.384419072Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:56.390747 containerd[1527]: time="2025-09-12T17:31:56.390627420Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 12 17:31:56.391462 containerd[1527]: time="2025-09-12T17:31:56.391320769Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 508.341823ms" Sep 12 17:31:56.391462 containerd[1527]: time="2025-09-12T17:31:56.391361434Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 12 17:31:56.392238 containerd[1527]: time="2025-09-12T17:31:56.392213227Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Sep 12 17:31:56.930524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1470832112.mount: Deactivated successfully. 
Sep 12 17:31:58.803328 containerd[1527]: time="2025-09-12T17:31:58.803261989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:58.805482 containerd[1527]: time="2025-09-12T17:31:58.805445948Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Sep 12 17:31:58.807908 containerd[1527]: time="2025-09-12T17:31:58.807853518Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:58.811254 containerd[1527]: time="2025-09-12T17:31:58.811189180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:31:58.813724 containerd[1527]: time="2025-09-12T17:31:58.813612111Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.421366135s" Sep 12 17:31:58.813795 containerd[1527]: time="2025-09-12T17:31:58.813723455Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Sep 12 17:32:03.569370 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 12 17:32:03.571552 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:03.776522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:03.792541 (kubelet)[2214]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:32:03.831064 kubelet[2214]: E0912 17:32:03.830941 2214 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:32:03.833717 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:32:03.833989 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:32:03.834407 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.3M memory peak. Sep 12 17:32:03.997487 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:03.997628 systemd[1]: kubelet.service: Consumed 146ms CPU time, 107.3M memory peak. Sep 12 17:32:03.999605 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:04.023653 systemd[1]: Reload requested from client PID 2229 ('systemctl') (unit session-7.scope)... Sep 12 17:32:04.023669 systemd[1]: Reloading... Sep 12 17:32:04.099785 zram_generator::config[2278]: No configuration found. Sep 12 17:32:04.335505 systemd[1]: Reloading finished in 311 ms. Sep 12 17:32:04.432349 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:04.433704 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Sep 12 17:32:04.436731 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:32:04.437051 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:04.437090 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.2M memory peak. Sep 12 17:32:04.438338 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:04.438993 kernel: hrtimer: interrupt took 17567706 ns Sep 12 17:32:04.553869 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:04.564197 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:32:04.602051 kubelet[2319]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:04.602051 kubelet[2319]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:32:04.602051 kubelet[2319]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:04.602051 kubelet[2319]: I0912 17:32:04.602031 2319 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:32:05.942748 kubelet[2319]: I0912 17:32:05.942694 2319 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:32:05.942748 kubelet[2319]: I0912 17:32:05.942733 2319 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:32:05.943144 kubelet[2319]: I0912 17:32:05.943032 2319 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:32:05.965179 kubelet[2319]: E0912 17:32:05.965128 2319 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:05.966320 kubelet[2319]: I0912 17:32:05.966290 2319 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:32:05.974018 kubelet[2319]: I0912 17:32:05.973993 2319 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:32:05.977584 kubelet[2319]: I0912 17:32:05.977507 2319 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:32:05.977795 kubelet[2319]: I0912 17:32:05.977741 2319 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:32:05.977965 kubelet[2319]: I0912 17:32:05.977792 2319 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:32:05.978111 kubelet[2319]: I0912 17:32:05.978047 2319 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:32:05.978111 kubelet[2319]: I0912 17:32:05.978056 2319 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:32:05.978298 kubelet[2319]: I0912 17:32:05.978278 2319 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:05.980824 kubelet[2319]: I0912 17:32:05.980781 2319 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:32:05.980824 kubelet[2319]: I0912 17:32:05.980809 2319 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:32:05.981317 kubelet[2319]: I0912 17:32:05.981302 2319 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:32:05.981364 kubelet[2319]: I0912 17:32:05.981323 2319 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:32:05.983337 kubelet[2319]: W0912 17:32:05.983259 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Sep 12 17:32:05.983337 kubelet[2319]: E0912 17:32:05.983333 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:05.984298 kubelet[2319]: W0912 17:32:05.984249 2319 reflector.go:569] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Sep 12 17:32:05.984371 kubelet[2319]: E0912 17:32:05.984300 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:05.986779 kubelet[2319]: I0912 17:32:05.986124 2319 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:32:05.986779 kubelet[2319]: I0912 17:32:05.986764 2319 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:32:05.986928 kubelet[2319]: W0912 17:32:05.986908 2319 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 12 17:32:05.987803 kubelet[2319]: I0912 17:32:05.987770 2319 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:32:05.987838 kubelet[2319]: I0912 17:32:05.987807 2319 server.go:1287] "Started kubelet" Sep 12 17:32:05.988837 kubelet[2319]: I0912 17:32:05.988791 2319 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:32:05.988982 kubelet[2319]: I0912 17:32:05.988932 2319 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:32:05.989260 kubelet[2319]: I0912 17:32:05.989221 2319 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:32:05.989671 kubelet[2319]: I0912 17:32:05.989642 2319 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:32:05.992472 kubelet[2319]: I0912 17:32:05.992441 2319 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:32:05.993555 kubelet[2319]: I0912 17:32:05.993520 2319 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:32:05.994624 kubelet[2319]: E0912 17:32:05.993984 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:32:05.994624 kubelet[2319]: I0912 17:32:05.994025 2319 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:32:05.994624 kubelet[2319]: I0912 17:32:05.994207 2319 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:32:05.994624 kubelet[2319]: I0912 17:32:05.994271 2319 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:32:05.994714 kubelet[2319]: W0912 17:32:05.994632 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Sep 12 17:32:05.994714 kubelet[2319]: E0912 17:32:05.994675 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection 
refused" logger="UnhandledError" Sep 12 17:32:05.995638 kubelet[2319]: I0912 17:32:05.995572 2319 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:32:05.995710 kubelet[2319]: I0912 17:32:05.995689 2319 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:32:05.995889 kubelet[2319]: E0912 17:32:05.995849 2319 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:32:05.996326 kubelet[2319]: E0912 17:32:05.996298 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Sep 12 17:32:05.996516 kubelet[2319]: I0912 17:32:05.996496 2319 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:32:06.000125 kubelet[2319]: E0912 17:32:05.999857 2319 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864995096cd0696 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:32:05.987788438 +0000 UTC m=+1.420152164,LastTimestamp:2025-09-12 17:32:05.987788438 +0000 UTC m=+1.420152164,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:32:06.007927 kubelet[2319]: I0912 17:32:06.007896 2319 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:32:06.007927 kubelet[2319]: I0912 17:32:06.007918 2319 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:32:06.007927 kubelet[2319]: I0912 17:32:06.007936 2319 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:06.094437 kubelet[2319]: E0912 17:32:06.094388 2319 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:32:06.099027 kubelet[2319]: I0912 17:32:06.098962 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:32:06.100321 kubelet[2319]: I0912 17:32:06.100271 2319 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:32:06.100321 kubelet[2319]: I0912 17:32:06.100301 2319 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:32:06.100321 kubelet[2319]: I0912 17:32:06.100323 2319 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 12 17:32:06.100460 kubelet[2319]: I0912 17:32:06.100332 2319 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:32:06.100460 kubelet[2319]: E0912 17:32:06.100375 2319 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:32:06.101021 kubelet[2319]: W0912 17:32:06.100846 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Sep 12 17:32:06.101021 kubelet[2319]: E0912 17:32:06.100883 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:06.107333 kubelet[2319]: I0912 17:32:06.107290 2319 policy_none.go:49] "None policy: Start" Sep 12 17:32:06.107333 kubelet[2319]: I0912 17:32:06.107322 2319 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:32:06.107333 kubelet[2319]: I0912 17:32:06.107335 2319 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:32:06.127697 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:32:06.141606 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:32:06.144686 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:32:06.169990 kubelet[2319]: I0912 17:32:06.169780 2319 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:32:06.170097 kubelet[2319]: I0912 17:32:06.170006 2319 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:32:06.170097 kubelet[2319]: I0912 17:32:06.170020 2319 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:32:06.170268 kubelet[2319]: I0912 17:32:06.170252 2319 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:32:06.171799 kubelet[2319]: E0912 17:32:06.171775 2319 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:32:06.171941 kubelet[2319]: E0912 17:32:06.171926 2319 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:32:06.197573 kubelet[2319]: E0912 17:32:06.197460 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Sep 12 17:32:06.210527 systemd[1]: Created slice kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice - libcontainer container kubepods-burstable-pod72a30db4fc25e4da65a3b99eba43be94.slice. 
Sep 12 17:32:06.229775 kubelet[2319]: E0912 17:32:06.229678 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:06.232438 systemd[1]: Created slice kubepods-burstable-poda16856d93d66536fcf118cf399d7cc25.slice - libcontainer container kubepods-burstable-poda16856d93d66536fcf118cf399d7cc25.slice. Sep 12 17:32:06.235441 kubelet[2319]: E0912 17:32:06.234873 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:06.238271 systemd[1]: Created slice kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice - libcontainer container kubepods-burstable-pod1403266a9792debaa127cd8df7a81c3c.slice. Sep 12 17:32:06.240361 kubelet[2319]: E0912 17:32:06.240327 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:06.271717 kubelet[2319]: I0912 17:32:06.271514 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:06.271988 kubelet[2319]: E0912 17:32:06.271961 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Sep 12 17:32:06.395961 kubelet[2319]: I0912 17:32:06.395920 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a16856d93d66536fcf118cf399d7cc25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a16856d93d66536fcf118cf399d7cc25\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:06.395961 kubelet[2319]: I0912 17:32:06.395961 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:06.396094 kubelet[2319]: I0912 17:32:06.395981 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:06.396094 kubelet[2319]: I0912 17:32:06.396012 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:06.396094 kubelet[2319]: I0912 17:32:06.396027 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a16856d93d66536fcf118cf399d7cc25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a16856d93d66536fcf118cf399d7cc25\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:06.396094 kubelet[2319]: I0912 17:32:06.396042 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a16856d93d66536fcf118cf399d7cc25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a16856d93d66536fcf118cf399d7cc25\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:06.396094 kubelet[2319]: I0912 17:32:06.396057 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:06.396191 kubelet[2319]: I0912 17:32:06.396071 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:06.396191 kubelet[2319]: I0912 17:32:06.396099 2319 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:06.473811 kubelet[2319]: I0912 17:32:06.473648 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:06.474070 kubelet[2319]: E0912 17:32:06.474030 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Sep 12 17:32:06.530490 kubelet[2319]: E0912 17:32:06.530442 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:06.531373 containerd[1527]: time="2025-09-12T17:32:06.531080400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:06.535826 kubelet[2319]: E0912 17:32:06.535791 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:06.536287 containerd[1527]: time="2025-09-12T17:32:06.536240687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a16856d93d66536fcf118cf399d7cc25,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:06.541640 kubelet[2319]: E0912 17:32:06.541585 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:06.541935 containerd[1527]: time="2025-09-12T17:32:06.541891314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:06.598009 kubelet[2319]: E0912 17:32:06.597927 2319 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": 
dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Sep 12 17:32:06.624981 containerd[1527]: time="2025-09-12T17:32:06.624515041Z" level=info msg="connecting to shim 2ba232bad4b89155a39a8beb690cfc7a61ed5a74ff3a22b694ca3d59d6ed2999" address="unix:///run/containerd/s/71d857e68bcfa4feaacca7e6bf061c36dfe892e9e0cbb2b8d7dd578c92647514" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:06.630696 containerd[1527]: time="2025-09-12T17:32:06.630649603Z" level=info msg="connecting to shim 3544f6cbc51b7ad72ca49c2367a95d3d44310a46690b67cb684fb7086ddc0d0a" address="unix:///run/containerd/s/a82554de266e8eabbe0ee78d044e5a1cba91d28ab12138e8570151090699bf99" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:06.631233 containerd[1527]: time="2025-09-12T17:32:06.630855717Z" level=info msg="connecting to shim d3cae7d769a38db85d10761385a737be46c1081030f73ec9e9df0597a6e5d19b" address="unix:///run/containerd/s/4f3c1f19e3dbeea35fef23819c979aef66052dc3f983c3b2268e8492dce82ebf" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:06.658910 systemd[1]: Started cri-containerd-d3cae7d769a38db85d10761385a737be46c1081030f73ec9e9df0597a6e5d19b.scope - libcontainer container d3cae7d769a38db85d10761385a737be46c1081030f73ec9e9df0597a6e5d19b. Sep 12 17:32:06.662965 systemd[1]: Started cri-containerd-2ba232bad4b89155a39a8beb690cfc7a61ed5a74ff3a22b694ca3d59d6ed2999.scope - libcontainer container 2ba232bad4b89155a39a8beb690cfc7a61ed5a74ff3a22b694ca3d59d6ed2999. Sep 12 17:32:06.664331 systemd[1]: Started cri-containerd-3544f6cbc51b7ad72ca49c2367a95d3d44310a46690b67cb684fb7086ddc0d0a.scope - libcontainer container 3544f6cbc51b7ad72ca49c2367a95d3d44310a46690b67cb684fb7086ddc0d0a. Sep 12 17:32:06.701069 containerd[1527]: time="2025-09-12T17:32:06.700881891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a16856d93d66536fcf118cf399d7cc25,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba232bad4b89155a39a8beb690cfc7a61ed5a74ff3a22b694ca3d59d6ed2999\"" Sep 12 17:32:06.702730 kubelet[2319]: E0912 17:32:06.702687 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:06.704555 containerd[1527]: time="2025-09-12T17:32:06.704509098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72a30db4fc25e4da65a3b99eba43be94,Namespace:kube-system,Attempt:0,} returns sandbox id \"d3cae7d769a38db85d10761385a737be46c1081030f73ec9e9df0597a6e5d19b\"" Sep 12 17:32:06.704705 containerd[1527]: time="2025-09-12T17:32:06.704672251Z" level=info msg="CreateContainer within sandbox \"2ba232bad4b89155a39a8beb690cfc7a61ed5a74ff3a22b694ca3d59d6ed2999\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:32:06.705255 kubelet[2319]: E0912 17:32:06.705234 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:06.706904 containerd[1527]: time="2025-09-12T17:32:06.706877682Z" level=info msg="CreateContainer within sandbox \"d3cae7d769a38db85d10761385a737be46c1081030f73ec9e9df0597a6e5d19b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:32:06.715890 containerd[1527]: time="2025-09-12T17:32:06.715851752Z" level=info msg="Container 1b64ce823e5bef5bb6f2bbd7debe69993d63941f07ed9c8329dedf816fec2fdc: CDI devices from CRI Config.CDIDevices: []" 
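The controller.go:145 errors show the kubelet's lease controller trying to ensure a Lease named "localhost" exists in the kube-node-lease namespace and backing off while the API server is still unreachable; the retry interval has doubled from the earlier 400ms to 800ms. A rough get-or-create sketch with client-go; the kubeconfig path and the 40-second lease duration (the kubelet's default) are assumptions for illustration, not values read from this system:

    package main

    import (
        "context"
        "fmt"

        coordinationv1 "k8s.io/api/coordination/v1"
        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // ptrTo is a small local helper so the sketch needs no extra dependency.
    func ptrTo[T any](v T) *T { return &v }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        const nodeName = "localhost" // node name taken from the lease URL in the log
        leases := client.CoordinationV1().Leases("kube-node-lease")
        ctx := context.Background()

        _, err = leases.Get(ctx, nodeName, metav1.GetOptions{})
        switch {
        case err == nil:
            fmt.Println("lease already exists")
        case apierrors.IsNotFound(err):
            lease := &coordinationv1.Lease{
                ObjectMeta: metav1.ObjectMeta{Name: nodeName},
                Spec: coordinationv1.LeaseSpec{
                    HolderIdentity:       ptrTo(nodeName),
                    LeaseDurationSeconds: ptrTo(int32(40)), // kubelet default
                },
            }
            if _, err := leases.Create(ctx, lease, metav1.CreateOptions{}); err != nil {
                fmt.Println("failed to ensure lease exists, will retry:", err)
                return
            }
            fmt.Println("lease created")
        default:
            fmt.Println("failed to ensure lease exists, will retry:", err)
        }
    }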
Sep 12 17:32:06.719098 containerd[1527]: time="2025-09-12T17:32:06.719040467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:1403266a9792debaa127cd8df7a81c3c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3544f6cbc51b7ad72ca49c2367a95d3d44310a46690b67cb684fb7086ddc0d0a\"" Sep 12 17:32:06.720066 kubelet[2319]: E0912 17:32:06.719852 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:06.721506 containerd[1527]: time="2025-09-12T17:32:06.721469068Z" level=info msg="CreateContainer within sandbox \"3544f6cbc51b7ad72ca49c2367a95d3d44310a46690b67cb684fb7086ddc0d0a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:32:06.724166 containerd[1527]: time="2025-09-12T17:32:06.724078479Z" level=info msg="Container fb91a3221b9465102019b2a57ba38878ca9d98f74cccbd0a2e66abe5bbfb052d: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:06.728205 containerd[1527]: time="2025-09-12T17:32:06.728164236Z" level=info msg="CreateContainer within sandbox \"d3cae7d769a38db85d10761385a737be46c1081030f73ec9e9df0597a6e5d19b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1b64ce823e5bef5bb6f2bbd7debe69993d63941f07ed9c8329dedf816fec2fdc\"" Sep 12 17:32:06.728775 containerd[1527]: time="2025-09-12T17:32:06.728724363Z" level=info msg="StartContainer for \"1b64ce823e5bef5bb6f2bbd7debe69993d63941f07ed9c8329dedf816fec2fdc\"" Sep 12 17:32:06.729857 containerd[1527]: time="2025-09-12T17:32:06.729786921Z" level=info msg="connecting to shim 1b64ce823e5bef5bb6f2bbd7debe69993d63941f07ed9c8329dedf816fec2fdc" address="unix:///run/containerd/s/4f3c1f19e3dbeea35fef23819c979aef66052dc3f983c3b2268e8492dce82ebf" protocol=ttrpc version=3 Sep 12 17:32:06.732274 containerd[1527]: time="2025-09-12T17:32:06.732114227Z" level=info msg="CreateContainer within sandbox \"2ba232bad4b89155a39a8beb690cfc7a61ed5a74ff3a22b694ca3d59d6ed2999\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"fb91a3221b9465102019b2a57ba38878ca9d98f74cccbd0a2e66abe5bbfb052d\"" Sep 12 17:32:06.732998 containerd[1527]: time="2025-09-12T17:32:06.732977157Z" level=info msg="StartContainer for \"fb91a3221b9465102019b2a57ba38878ca9d98f74cccbd0a2e66abe5bbfb052d\"" Sep 12 17:32:06.734927 containerd[1527]: time="2025-09-12T17:32:06.734889513Z" level=info msg="connecting to shim fb91a3221b9465102019b2a57ba38878ca9d98f74cccbd0a2e66abe5bbfb052d" address="unix:///run/containerd/s/71d857e68bcfa4feaacca7e6bf061c36dfe892e9e0cbb2b8d7dd578c92647514" protocol=ttrpc version=3 Sep 12 17:32:06.735531 containerd[1527]: time="2025-09-12T17:32:06.735502089Z" level=info msg="Container 32817acea6afd6bab1563cb895ebb1fb8cdb6265e07a7adbb6c87e9123ec8e4a: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:06.743556 containerd[1527]: time="2025-09-12T17:32:06.743520781Z" level=info msg="CreateContainer within sandbox \"3544f6cbc51b7ad72ca49c2367a95d3d44310a46690b67cb684fb7086ddc0d0a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32817acea6afd6bab1563cb895ebb1fb8cdb6265e07a7adbb6c87e9123ec8e4a\"" Sep 12 17:32:06.745151 containerd[1527]: time="2025-09-12T17:32:06.744040509Z" level=info msg="StartContainer for \"32817acea6afd6bab1563cb895ebb1fb8cdb6265e07a7adbb6c87e9123ec8e4a\"" Sep 12 17:32:06.745151 containerd[1527]: time="2025-09-12T17:32:06.745056183Z" level=info 
msg="connecting to shim 32817acea6afd6bab1563cb895ebb1fb8cdb6265e07a7adbb6c87e9123ec8e4a" address="unix:///run/containerd/s/a82554de266e8eabbe0ee78d044e5a1cba91d28ab12138e8570151090699bf99" protocol=ttrpc version=3 Sep 12 17:32:06.752897 systemd[1]: Started cri-containerd-fb91a3221b9465102019b2a57ba38878ca9d98f74cccbd0a2e66abe5bbfb052d.scope - libcontainer container fb91a3221b9465102019b2a57ba38878ca9d98f74cccbd0a2e66abe5bbfb052d. Sep 12 17:32:06.755586 systemd[1]: Started cri-containerd-1b64ce823e5bef5bb6f2bbd7debe69993d63941f07ed9c8329dedf816fec2fdc.scope - libcontainer container 1b64ce823e5bef5bb6f2bbd7debe69993d63941f07ed9c8329dedf816fec2fdc. Sep 12 17:32:06.771924 systemd[1]: Started cri-containerd-32817acea6afd6bab1563cb895ebb1fb8cdb6265e07a7adbb6c87e9123ec8e4a.scope - libcontainer container 32817acea6afd6bab1563cb895ebb1fb8cdb6265e07a7adbb6c87e9123ec8e4a. Sep 12 17:32:06.811578 containerd[1527]: time="2025-09-12T17:32:06.811536747Z" level=info msg="StartContainer for \"1b64ce823e5bef5bb6f2bbd7debe69993d63941f07ed9c8329dedf816fec2fdc\" returns successfully" Sep 12 17:32:06.813182 containerd[1527]: time="2025-09-12T17:32:06.813152064Z" level=info msg="StartContainer for \"fb91a3221b9465102019b2a57ba38878ca9d98f74cccbd0a2e66abe5bbfb052d\" returns successfully" Sep 12 17:32:06.828486 containerd[1527]: time="2025-09-12T17:32:06.828390377Z" level=info msg="StartContainer for \"32817acea6afd6bab1563cb895ebb1fb8cdb6265e07a7adbb6c87e9123ec8e4a\" returns successfully" Sep 12 17:32:06.837610 kubelet[2319]: W0912 17:32:06.837536 2319 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Sep 12 17:32:06.837610 kubelet[2319]: E0912 17:32:06.837613 2319 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.138:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.138:6443: connect: connection refused" logger="UnhandledError" Sep 12 17:32:06.876389 kubelet[2319]: I0912 17:32:06.876359 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:06.877276 kubelet[2319]: E0912 17:32:06.877246 2319 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Sep 12 17:32:07.108987 kubelet[2319]: E0912 17:32:07.108959 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:07.109520 kubelet[2319]: E0912 17:32:07.109084 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:07.111224 kubelet[2319]: E0912 17:32:07.111027 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:07.111224 kubelet[2319]: E0912 17:32:07.111175 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:07.112246 kubelet[2319]: 
E0912 17:32:07.112223 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:07.112356 kubelet[2319]: E0912 17:32:07.112326 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:07.678945 kubelet[2319]: I0912 17:32:07.678911 2319 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:08.116234 kubelet[2319]: E0912 17:32:08.116190 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:08.116561 kubelet[2319]: E0912 17:32:08.116326 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:08.118071 kubelet[2319]: E0912 17:32:08.118046 2319 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:32:08.118770 kubelet[2319]: E0912 17:32:08.118211 2319 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:08.465947 kubelet[2319]: E0912 17:32:08.464943 2319 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:32:08.508116 kubelet[2319]: I0912 17:32:08.508077 2319 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:32:08.595593 kubelet[2319]: I0912 17:32:08.595547 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:08.602782 kubelet[2319]: E0912 17:32:08.602707 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:08.602782 kubelet[2319]: I0912 17:32:08.602736 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:08.604790 kubelet[2319]: E0912 17:32:08.604544 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:08.604790 kubelet[2319]: I0912 17:32:08.604572 2319 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:08.606076 kubelet[2319]: E0912 17:32:08.606050 2319 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:08.985001 kubelet[2319]: I0912 17:32:08.984951 2319 apiserver.go:52] "Watching apiserver" Sep 12 17:32:08.994348 kubelet[2319]: I0912 17:32:08.994300 2319 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:32:10.510121 systemd[1]: Reload requested from client PID 2601 ('systemctl') (unit session-7.scope)... Sep 12 17:32:10.510141 systemd[1]: Reloading... 
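The three "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors at 17:32:08 are transient: system-node-critical is one of the built-in PriorityClasses that the kube-apiserver creates automatically shortly after it starts serving, so the mirror pods succeed on a later sync (by 17:32:12 below they already exist). A small client-go sketch that polls for the PriorityClass; the kubeconfig path and poll interval are assumptions:

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // Poll until the built-in PriorityClass referenced by the static
    // control-plane pods has been created by the API server.
    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)

        for {
            pc, err := client.SchedulingV1().PriorityClasses().Get(context.Background(),
                "system-node-critical", metav1.GetOptions{})
            switch {
            case err == nil:
                fmt.Printf("found %s with value %d\n", pc.Name, pc.Value)
                return
            case apierrors.IsNotFound(err):
                fmt.Println("not created yet, retrying")
            default:
                fmt.Println("transient error:", err)
            }
            time.Sleep(2 * time.Second)
        }
    }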
Sep 12 17:32:10.602783 zram_generator::config[2645]: No configuration found. Sep 12 17:32:10.774022 systemd[1]: Reloading finished in 263 ms. Sep 12 17:32:10.806664 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:10.820282 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:32:10.820510 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:10.820563 systemd[1]: kubelet.service: Consumed 1.809s CPU time, 129.5M memory peak. Sep 12 17:32:10.823691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:32:10.986369 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:32:10.990605 (kubelet)[2686]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:32:11.031281 kubelet[2686]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:11.031281 kubelet[2686]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:32:11.031281 kubelet[2686]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:32:11.031281 kubelet[2686]: I0912 17:32:11.031257 2686 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:32:11.038450 kubelet[2686]: I0912 17:32:11.038404 2686 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Sep 12 17:32:11.038450 kubelet[2686]: I0912 17:32:11.038435 2686 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:32:11.038962 kubelet[2686]: I0912 17:32:11.038656 2686 server.go:954] "Client rotation is on, will bootstrap in background" Sep 12 17:32:11.039909 kubelet[2686]: I0912 17:32:11.039885 2686 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Sep 12 17:32:11.043115 kubelet[2686]: I0912 17:32:11.043055 2686 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:32:11.049234 kubelet[2686]: I0912 17:32:11.049130 2686 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:32:11.054364 kubelet[2686]: I0912 17:32:11.054335 2686 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:32:11.054566 kubelet[2686]: I0912 17:32:11.054538 2686 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:32:11.054723 kubelet[2686]: I0912 17:32:11.054567 2686 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:32:11.054862 kubelet[2686]: I0912 17:32:11.054732 2686 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:32:11.054862 kubelet[2686]: I0912 17:32:11.054740 2686 container_manager_linux.go:304] "Creating device plugin manager" Sep 12 17:32:11.054862 kubelet[2686]: I0912 17:32:11.054810 2686 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:11.054971 kubelet[2686]: I0912 17:32:11.054936 2686 kubelet.go:446] "Attempting to sync node with API server" Sep 12 17:32:11.054971 kubelet[2686]: I0912 17:32:11.054948 2686 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:32:11.054971 kubelet[2686]: I0912 17:32:11.054969 2686 kubelet.go:352] "Adding apiserver pod source" Sep 12 17:32:11.055215 kubelet[2686]: I0912 17:32:11.054979 2686 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:32:11.059112 kubelet[2686]: I0912 17:32:11.059055 2686 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:32:11.060875 kubelet[2686]: I0912 17:32:11.060830 2686 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Sep 12 17:32:11.061370 kubelet[2686]: I0912 17:32:11.061348 2686 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:32:11.061429 kubelet[2686]: I0912 17:32:11.061382 2686 server.go:1287] "Started kubelet" Sep 12 17:32:11.061866 kubelet[2686]: I0912 17:32:11.061820 2686 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:32:11.062000 kubelet[2686]: I0912 17:32:11.061953 2686 ratelimit.go:55] "Setting 
rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:32:11.062226 kubelet[2686]: I0912 17:32:11.062205 2686 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:32:11.064307 kubelet[2686]: I0912 17:32:11.064277 2686 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:32:11.064434 kubelet[2686]: I0912 17:32:11.064399 2686 server.go:479] "Adding debug handlers to kubelet server" Sep 12 17:32:11.071941 kubelet[2686]: I0912 17:32:11.071910 2686 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:32:11.073404 kubelet[2686]: E0912 17:32:11.073381 2686 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:32:11.079964 kubelet[2686]: I0912 17:32:11.079936 2686 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:32:11.080161 kubelet[2686]: I0912 17:32:11.080150 2686 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:32:11.081814 kubelet[2686]: I0912 17:32:11.081712 2686 factory.go:221] Registration of the systemd container factory successfully Sep 12 17:32:11.081936 kubelet[2686]: E0912 17:32:11.081910 2686 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:32:11.082045 kubelet[2686]: I0912 17:32:11.082025 2686 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:32:11.082214 kubelet[2686]: I0912 17:32:11.082041 2686 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:32:11.084920 kubelet[2686]: I0912 17:32:11.084892 2686 factory.go:221] Registration of the containerd container factory successfully Sep 12 17:32:11.103653 kubelet[2686]: I0912 17:32:11.103506 2686 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Sep 12 17:32:11.109083 kubelet[2686]: I0912 17:32:11.109042 2686 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Sep 12 17:32:11.109083 kubelet[2686]: I0912 17:32:11.109079 2686 status_manager.go:227] "Starting to sync pod status with apiserver" Sep 12 17:32:11.109219 kubelet[2686]: I0912 17:32:11.109100 2686 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
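The restarted kubelet (PID 2686) serves the pod-resources gRPC API on unix:/var/lib/kubelet/pod-resources/kubelet.sock, as server.go:243 reports, rate-limited to 100 QPS with a burst of 10. A sketch of a client listing pod resources over that socket; it assumes the google.golang.org/grpc and k8s.io/kubelet modules and root access to the socket, with the socket path taken from the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        podresourcesv1 "k8s.io/kubelet/pkg/apis/podresources/v1"
    )

    func main() {
        // Socket path as reported by the kubelet above; requires root.
        conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        client := podresourcesv1.NewPodResourcesListerClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        resp, err := client.List(ctx, &podresourcesv1.ListPodResourcesRequest{})
        if err != nil {
            panic(err)
        }
        for _, pod := range resp.GetPodResources() {
            fmt.Printf("%s/%s: %d containers\n",
                pod.GetNamespace(), pod.GetName(), len(pod.GetContainers()))
        }
    }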
Sep 12 17:32:11.109219 kubelet[2686]: I0912 17:32:11.109108 2686 kubelet.go:2382] "Starting kubelet main sync loop" Sep 12 17:32:11.109219 kubelet[2686]: E0912 17:32:11.109158 2686 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:32:11.139094 kubelet[2686]: I0912 17:32:11.139069 2686 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:32:11.139094 kubelet[2686]: I0912 17:32:11.139086 2686 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:32:11.139094 kubelet[2686]: I0912 17:32:11.139106 2686 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:32:11.139252 kubelet[2686]: I0912 17:32:11.139242 2686 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:32:11.139277 kubelet[2686]: I0912 17:32:11.139252 2686 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:32:11.139277 kubelet[2686]: I0912 17:32:11.139269 2686 policy_none.go:49] "None policy: Start" Sep 12 17:32:11.139277 kubelet[2686]: I0912 17:32:11.139277 2686 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:32:11.139350 kubelet[2686]: I0912 17:32:11.139286 2686 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:32:11.139387 kubelet[2686]: I0912 17:32:11.139373 2686 state_mem.go:75] "Updated machine memory state" Sep 12 17:32:11.142867 kubelet[2686]: I0912 17:32:11.142847 2686 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 12 17:32:11.143027 kubelet[2686]: I0912 17:32:11.143007 2686 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:32:11.143075 kubelet[2686]: I0912 17:32:11.143022 2686 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:32:11.143483 kubelet[2686]: I0912 17:32:11.143369 2686 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:32:11.144318 kubelet[2686]: E0912 17:32:11.144297 2686 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 12 17:32:11.210340 kubelet[2686]: I0912 17:32:11.210304 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:11.210442 kubelet[2686]: I0912 17:32:11.210349 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:11.210595 kubelet[2686]: I0912 17:32:11.210500 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:11.247427 kubelet[2686]: I0912 17:32:11.247392 2686 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:32:11.255995 kubelet[2686]: I0912 17:32:11.255954 2686 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 12 17:32:11.256122 kubelet[2686]: I0912 17:32:11.256045 2686 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:32:11.283543 kubelet[2686]: I0912 17:32:11.283440 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72a30db4fc25e4da65a3b99eba43be94-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72a30db4fc25e4da65a3b99eba43be94\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:11.283835 kubelet[2686]: I0912 17:32:11.283702 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a16856d93d66536fcf118cf399d7cc25-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a16856d93d66536fcf118cf399d7cc25\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:11.283835 kubelet[2686]: I0912 17:32:11.283724 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a16856d93d66536fcf118cf399d7cc25-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a16856d93d66536fcf118cf399d7cc25\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:11.283835 kubelet[2686]: I0912 17:32:11.283743 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a16856d93d66536fcf118cf399d7cc25-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a16856d93d66536fcf118cf399d7cc25\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:11.283835 kubelet[2686]: I0912 17:32:11.283797 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:11.283835 kubelet[2686]: I0912 17:32:11.283812 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:11.284082 kubelet[2686]: I0912 17:32:11.283826 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:11.284082 kubelet[2686]: I0912 17:32:11.284026 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:11.284082 kubelet[2686]: I0912 17:32:11.284047 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1403266a9792debaa127cd8df7a81c3c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"1403266a9792debaa127cd8df7a81c3c\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:32:11.492446 sudo[2726]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:32:11.493056 sudo[2726]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:32:11.520339 kubelet[2686]: E0912 17:32:11.520057 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:11.520605 kubelet[2686]: E0912 17:32:11.520567 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:11.520653 kubelet[2686]: E0912 17:32:11.520594 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:11.821363 sudo[2726]: pam_unix(sudo:session): session closed for user root Sep 12 17:32:12.056511 kubelet[2686]: I0912 17:32:12.056470 2686 apiserver.go:52] "Watching apiserver" Sep 12 17:32:12.082092 kubelet[2686]: I0912 17:32:12.081979 2686 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:32:12.127743 kubelet[2686]: E0912 17:32:12.127701 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:12.128696 kubelet[2686]: I0912 17:32:12.128036 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:12.128696 kubelet[2686]: I0912 17:32:12.128340 2686 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:12.137773 kubelet[2686]: E0912 17:32:12.137727 2686 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:32:12.138111 kubelet[2686]: E0912 17:32:12.137884 2686 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:32:12.138111 kubelet[2686]: E0912 17:32:12.137897 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Sep 12 17:32:12.138111 kubelet[2686]: E0912 17:32:12.138026 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:12.167294 kubelet[2686]: I0912 17:32:12.166966 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.166947171 podStartE2EDuration="1.166947171s" podCreationTimestamp="2025-09-12 17:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:12.152519926 +0000 UTC m=+1.158491401" watchObservedRunningTime="2025-09-12 17:32:12.166947171 +0000 UTC m=+1.172918606" Sep 12 17:32:12.167294 kubelet[2686]: I0912 17:32:12.167204 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.167197036 podStartE2EDuration="1.167197036s" podCreationTimestamp="2025-09-12 17:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:12.166817156 +0000 UTC m=+1.172788631" watchObservedRunningTime="2025-09-12 17:32:12.167197036 +0000 UTC m=+1.173168511" Sep 12 17:32:12.184812 kubelet[2686]: I0912 17:32:12.184729 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.184716305 podStartE2EDuration="1.184716305s" podCreationTimestamp="2025-09-12 17:32:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:12.175565005 +0000 UTC m=+1.181536480" watchObservedRunningTime="2025-09-12 17:32:12.184716305 +0000 UTC m=+1.190687780" Sep 12 17:32:13.129978 kubelet[2686]: E0912 17:32:13.129937 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:13.130372 kubelet[2686]: E0912 17:32:13.130014 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:13.564113 sudo[1741]: pam_unix(sudo:session): session closed for user root Sep 12 17:32:13.565362 sshd[1740]: Connection closed by 10.0.0.1 port 50268 Sep 12 17:32:13.565903 sshd-session[1737]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:13.569205 systemd-logind[1496]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:32:13.569326 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:50268.service: Deactivated successfully. Sep 12 17:32:13.571287 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:32:13.571475 systemd[1]: session-7.scope: Consumed 7.171s CPU time, 260.1M memory peak. Sep 12 17:32:13.573871 systemd-logind[1496]: Removed session 7. 
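The recurring dns.go:153 warnings mean the resolv.conf the kubelet uses for pods lists more nameservers than a pod's resolv.conf may carry: the kubelet caps it at three, so it keeps 1.1.1.1, 1.0.0.1 and 8.8.8.8 here and drops the rest. A simplified standard-library sketch of that truncation (the kubelet's real parsing also handles search domains and options):

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // limit enforced for pod resolv.conf

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            panic(err)
        }
        defer f.Close()

        var nameservers []string
        scanner := bufio.NewScanner(f)
        for scanner.Scan() {
            fields := strings.Fields(scanner.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                nameservers = append(nameservers, fields[1])
            }
        }
        if err := scanner.Err(); err != nil {
            panic(err)
        }

        if len(nameservers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded, keeping first %d of %d\n",
                maxNameservers, len(nameservers))
            nameservers = nameservers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", strings.Join(nameservers, " "))
    }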
Sep 12 17:32:14.592160 kubelet[2686]: E0912 17:32:14.592127 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:17.409816 kubelet[2686]: I0912 17:32:17.409781 2686 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:32:17.410205 containerd[1527]: time="2025-09-12T17:32:17.410154333Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 12 17:32:17.410399 kubelet[2686]: I0912 17:32:17.410326 2686 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:32:17.485351 kubelet[2686]: E0912 17:32:17.485311 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:18.136919 kubelet[2686]: E0912 17:32:18.136887 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:18.337358 systemd[1]: Created slice kubepods-besteffort-podadb4dc84_127e_40e0_9c2d_a303e3371120.slice - libcontainer container kubepods-besteffort-podadb4dc84_127e_40e0_9c2d_a303e3371120.slice. Sep 12 17:32:18.352863 systemd[1]: Created slice kubepods-burstable-podba139a6d_0a33_4a3d_a8da_792686278fe8.slice - libcontainer container kubepods-burstable-podba139a6d_0a33_4a3d_a8da_792686278fe8.slice. Sep 12 17:32:18.427147 systemd[1]: Created slice kubepods-besteffort-pod870ac407_5d29_43d5_8640_e6a08b763722.slice - libcontainer container kubepods-besteffort-pod870ac407_5d29_43d5_8640_e6a08b763722.slice. 
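Once the node has been assigned the pod CIDR 192.168.0.0/24, the kubelet propagates it to the container runtime through the CRI UpdateRuntimeConfig call, which is what the kuberuntime_manager and kubelet_network lines above reflect; containerd then still waits for a CNI plugin (Cilium, whose pods are created next) to drop its own config. A sketch of issuing that same call against containerd's CRI socket, assuming the k8s.io/cri-api and google.golang.org/grpc modules and the default socket path (the path itself is not shown in the log):

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Default containerd CRI endpoint; requires root. Assumed path.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Same update the kubelet issues after the node gets 192.168.0.0/24.
        _, err = rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
            RuntimeConfig: &runtimeapi.RuntimeConfig{
                NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
            },
        })
        if err != nil {
            panic(err)
        }
        fmt.Println("runtime network config updated")
    }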
Sep 12 17:32:18.427322 kubelet[2686]: I0912 17:32:18.427255 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/adb4dc84-127e-40e0-9c2d-a303e3371120-lib-modules\") pod \"kube-proxy-5lbjt\" (UID: \"adb4dc84-127e-40e0-9c2d-a303e3371120\") " pod="kube-system/kube-proxy-5lbjt" Sep 12 17:32:18.427322 kubelet[2686]: I0912 17:32:18.427290 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cni-path\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427551 kubelet[2686]: I0912 17:32:18.427404 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-lib-modules\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427551 kubelet[2686]: I0912 17:32:18.427443 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba139a6d-0a33-4a3d-a8da-792686278fe8-clustermesh-secrets\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427551 kubelet[2686]: I0912 17:32:18.427460 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-kernel\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427551 kubelet[2686]: I0912 17:32:18.427483 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/adb4dc84-127e-40e0-9c2d-a303e3371120-kube-proxy\") pod \"kube-proxy-5lbjt\" (UID: \"adb4dc84-127e-40e0-9c2d-a303e3371120\") " pod="kube-system/kube-proxy-5lbjt" Sep 12 17:32:18.427551 kubelet[2686]: I0912 17:32:18.427503 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-etc-cni-netd\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427551 kubelet[2686]: I0912 17:32:18.427521 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-run\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427687 kubelet[2686]: I0912 17:32:18.427537 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/adb4dc84-127e-40e0-9c2d-a303e3371120-xtables-lock\") pod \"kube-proxy-5lbjt\" (UID: \"adb4dc84-127e-40e0-9c2d-a303e3371120\") " pod="kube-system/kube-proxy-5lbjt" Sep 12 17:32:18.427687 kubelet[2686]: I0912 17:32:18.427557 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zqr58\" (UniqueName: \"kubernetes.io/projected/adb4dc84-127e-40e0-9c2d-a303e3371120-kube-api-access-zqr58\") pod \"kube-proxy-5lbjt\" (UID: \"adb4dc84-127e-40e0-9c2d-a303e3371120\") " pod="kube-system/kube-proxy-5lbjt" Sep 12 17:32:18.427687 kubelet[2686]: I0912 17:32:18.427576 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-bpf-maps\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427687 kubelet[2686]: I0912 17:32:18.427595 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-cgroup\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427687 kubelet[2686]: I0912 17:32:18.427611 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nb9k\" (UniqueName: \"kubernetes.io/projected/870ac407-5d29-43d5-8640-e6a08b763722-kube-api-access-4nb9k\") pod \"cilium-operator-6c4d7847fc-hjpx7\" (UID: \"870ac407-5d29-43d5-8640-e6a08b763722\") " pod="kube-system/cilium-operator-6c4d7847fc-hjpx7" Sep 12 17:32:18.427813 kubelet[2686]: I0912 17:32:18.427632 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-hostproc\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427813 kubelet[2686]: I0912 17:32:18.427663 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-xtables-lock\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427813 kubelet[2686]: I0912 17:32:18.427686 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-hubble-tls\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427813 kubelet[2686]: I0912 17:32:18.427712 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tkdv\" (UniqueName: \"kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-kube-api-access-5tkdv\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.427813 kubelet[2686]: I0912 17:32:18.427732 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-config-path\") pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.429335 kubelet[2686]: I0912 17:32:18.429229 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-net\") 
pod \"cilium-4k9nm\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " pod="kube-system/cilium-4k9nm" Sep 12 17:32:18.429335 kubelet[2686]: I0912 17:32:18.429267 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870ac407-5d29-43d5-8640-e6a08b763722-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-hjpx7\" (UID: \"870ac407-5d29-43d5-8640-e6a08b763722\") " pod="kube-system/cilium-operator-6c4d7847fc-hjpx7" Sep 12 17:32:18.651726 kubelet[2686]: E0912 17:32:18.651677 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:18.652235 containerd[1527]: time="2025-09-12T17:32:18.652202142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lbjt,Uid:adb4dc84-127e-40e0-9c2d-a303e3371120,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:18.656576 kubelet[2686]: E0912 17:32:18.656534 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:18.656925 containerd[1527]: time="2025-09-12T17:32:18.656880678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4k9nm,Uid:ba139a6d-0a33-4a3d-a8da-792686278fe8,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:18.674855 containerd[1527]: time="2025-09-12T17:32:18.674808088Z" level=info msg="connecting to shim b79e134033e1a58b740ef9b5d1e26319db42ff1d17130968e205e47e94902856" address="unix:///run/containerd/s/6cc1d1a97e5de7a5808818ffe08e2e8a9edbe957dcc715e9f3ebeef04385adc8" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:18.677210 containerd[1527]: time="2025-09-12T17:32:18.677176926Z" level=info msg="connecting to shim dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550" address="unix:///run/containerd/s/6b810c1a725873c7d4d47b4d555b2f0116f4d5f1059922e599c5ef7378f86fd9" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:18.701949 systemd[1]: Started cri-containerd-b79e134033e1a58b740ef9b5d1e26319db42ff1d17130968e205e47e94902856.scope - libcontainer container b79e134033e1a58b740ef9b5d1e26319db42ff1d17130968e205e47e94902856. Sep 12 17:32:18.705137 systemd[1]: Started cri-containerd-dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550.scope - libcontainer container dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550. 
Sep 12 17:32:18.728407 containerd[1527]: time="2025-09-12T17:32:18.728369010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5lbjt,Uid:adb4dc84-127e-40e0-9c2d-a303e3371120,Namespace:kube-system,Attempt:0,} returns sandbox id \"b79e134033e1a58b740ef9b5d1e26319db42ff1d17130968e205e47e94902856\"" Sep 12 17:32:18.729294 kubelet[2686]: E0912 17:32:18.729258 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:18.732754 kubelet[2686]: E0912 17:32:18.732714 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:18.733795 containerd[1527]: time="2025-09-12T17:32:18.733647978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hjpx7,Uid:870ac407-5d29-43d5-8640-e6a08b763722,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:18.736856 containerd[1527]: time="2025-09-12T17:32:18.736826273Z" level=info msg="CreateContainer within sandbox \"b79e134033e1a58b740ef9b5d1e26319db42ff1d17130968e205e47e94902856\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 12 17:32:18.738012 containerd[1527]: time="2025-09-12T17:32:18.737956755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4k9nm,Uid:ba139a6d-0a33-4a3d-a8da-792686278fe8,Namespace:kube-system,Attempt:0,} returns sandbox id \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\"" Sep 12 17:32:18.738915 kubelet[2686]: E0912 17:32:18.738896 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:18.740626 containerd[1527]: time="2025-09-12T17:32:18.740557346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 12 17:32:18.749936 containerd[1527]: time="2025-09-12T17:32:18.749743563Z" level=info msg="Container 62d4efef61e37500c07e14b66a78317b21cbb6f2a657d89bacaafd5ad55e00df: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:18.756052 containerd[1527]: time="2025-09-12T17:32:18.756016408Z" level=info msg="CreateContainer within sandbox \"b79e134033e1a58b740ef9b5d1e26319db42ff1d17130968e205e47e94902856\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"62d4efef61e37500c07e14b66a78317b21cbb6f2a657d89bacaafd5ad55e00df\"" Sep 12 17:32:18.757188 containerd[1527]: time="2025-09-12T17:32:18.757155612Z" level=info msg="connecting to shim 74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652" address="unix:///run/containerd/s/caf761bcd31e3b4a11727bcd69acef75c1953d2185e8f1caf02d1bd0c12169a7" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:18.757870 containerd[1527]: time="2025-09-12T17:32:18.757231276Z" level=info msg="StartContainer for \"62d4efef61e37500c07e14b66a78317b21cbb6f2a657d89bacaafd5ad55e00df\"" Sep 12 17:32:18.762342 containerd[1527]: time="2025-09-12T17:32:18.762152369Z" level=info msg="connecting to shim 62d4efef61e37500c07e14b66a78317b21cbb6f2a657d89bacaafd5ad55e00df" address="unix:///run/containerd/s/6cc1d1a97e5de7a5808818ffe08e2e8a9edbe957dcc715e9f3ebeef04385adc8" protocol=ttrpc version=3 Sep 12 17:32:18.791913 systemd[1]: Started cri-containerd-62d4efef61e37500c07e14b66a78317b21cbb6f2a657d89bacaafd5ad55e00df.scope - 
libcontainer container 62d4efef61e37500c07e14b66a78317b21cbb6f2a657d89bacaafd5ad55e00df. Sep 12 17:32:18.792978 systemd[1]: Started cri-containerd-74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652.scope - libcontainer container 74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652. Sep 12 17:32:18.831621 containerd[1527]: time="2025-09-12T17:32:18.831583884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-hjpx7,Uid:870ac407-5d29-43d5-8640-e6a08b763722,Namespace:kube-system,Attempt:0,} returns sandbox id \"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\"" Sep 12 17:32:18.832275 containerd[1527]: time="2025-09-12T17:32:18.831883780Z" level=info msg="StartContainer for \"62d4efef61e37500c07e14b66a78317b21cbb6f2a657d89bacaafd5ad55e00df\" returns successfully" Sep 12 17:32:18.833442 kubelet[2686]: E0912 17:32:18.833421 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:19.141461 kubelet[2686]: E0912 17:32:19.141390 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:21.978450 kubelet[2686]: E0912 17:32:21.978403 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:21.992946 kubelet[2686]: I0912 17:32:21.992576 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5lbjt" podStartSLOduration=3.992557657 podStartE2EDuration="3.992557657s" podCreationTimestamp="2025-09-12 17:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:19.151885939 +0000 UTC m=+8.157857414" watchObservedRunningTime="2025-09-12 17:32:21.992557657 +0000 UTC m=+10.998529132" Sep 12 17:32:22.147008 kubelet[2686]: E0912 17:32:22.146963 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:24.599980 kubelet[2686]: E0912 17:32:24.599947 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:25.555622 update_engine[1498]: I20250912 17:32:25.555536 1498 update_attempter.cc:509] Updating boot flags... Sep 12 17:32:32.147942 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount78945151.mount: Deactivated successfully. 
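The pod_startup_latency_tracker entry for kube-proxy-5lbjt is plain timestamp arithmetic: the pod was created at 17:32:18 and its running state was observed through the watch at 17:32:21.992557657, giving the reported podStartSLOduration of 3.992557657s; the zero firstStartedPulling/lastFinishedPulling timestamps mean no image-pull window is excluded. A tiny standard-library check of that subtraction:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching the timestamps printed in the log entries above.
        const layout = "2006-01-02 15:04:05 -0700 MST"

        created, err := time.Parse(layout, "2025-09-12 17:32:18 +0000 UTC")
        if err != nil {
            panic(err)
        }
        observed, err := time.Parse(layout, "2025-09-12 17:32:21.992557657 +0000 UTC")
        if err != nil {
            panic(err)
        }

        // Prints 3.992557657s, matching podStartSLOduration in the log.
        fmt.Println("podStartSLOduration:", observed.Sub(created))
    }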
Sep 12 17:32:33.428325 containerd[1527]: time="2025-09-12T17:32:33.428264734Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 12 17:32:33.431053 containerd[1527]: time="2025-09-12T17:32:33.431019509Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.690410748s" Sep 12 17:32:33.431053 containerd[1527]: time="2025-09-12T17:32:33.431054675Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 12 17:32:33.432164 containerd[1527]: time="2025-09-12T17:32:33.432125836Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:33.432982 containerd[1527]: time="2025-09-12T17:32:33.432955441Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:33.441197 containerd[1527]: time="2025-09-12T17:32:33.440953005Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 12 17:32:33.451142 containerd[1527]: time="2025-09-12T17:32:33.451103134Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:32:33.457959 containerd[1527]: time="2025-09-12T17:32:33.457915560Z" level=info msg="Container 76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:33.460636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86631765.mount: Deactivated successfully. Sep 12 17:32:33.468342 containerd[1527]: time="2025-09-12T17:32:33.468268080Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\"" Sep 12 17:32:33.469299 containerd[1527]: time="2025-09-12T17:32:33.469196900Z" level=info msg="StartContainer for \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\"" Sep 12 17:32:33.470377 containerd[1527]: time="2025-09-12T17:32:33.470342512Z" level=info msg="connecting to shim 76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337" address="unix:///run/containerd/s/6b810c1a725873c7d4d47b4d555b2f0116f4d5f1059922e599c5ef7378f86fd9" protocol=ttrpc version=3 Sep 12 17:32:33.517990 systemd[1]: Started cri-containerd-76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337.scope - libcontainer container 76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337. 
Sep 12 17:32:33.548830 containerd[1527]: time="2025-09-12T17:32:33.548789768Z" level=info msg="StartContainer for \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" returns successfully" Sep 12 17:32:33.564848 systemd[1]: cri-containerd-76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337.scope: Deactivated successfully. Sep 12 17:32:33.594646 containerd[1527]: time="2025-09-12T17:32:33.594590586Z" level=info msg="received exit event container_id:\"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" id:\"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" pid:3129 exited_at:{seconds:1757698353 nanos:585077033}" Sep 12 17:32:33.594858 containerd[1527]: time="2025-09-12T17:32:33.594826982Z" level=info msg="TaskExit event in podsandbox handler container_id:\"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" id:\"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" pid:3129 exited_at:{seconds:1757698353 nanos:585077033}" Sep 12 17:32:33.648162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337-rootfs.mount: Deactivated successfully. Sep 12 17:32:34.176241 kubelet[2686]: E0912 17:32:34.176209 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:34.181979 containerd[1527]: time="2025-09-12T17:32:34.181841106Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:32:34.193031 containerd[1527]: time="2025-09-12T17:32:34.192953908Z" level=info msg="Container 5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:34.198452 containerd[1527]: time="2025-09-12T17:32:34.198395773Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\"" Sep 12 17:32:34.199495 containerd[1527]: time="2025-09-12T17:32:34.199451005Z" level=info msg="StartContainer for \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\"" Sep 12 17:32:34.200456 containerd[1527]: time="2025-09-12T17:32:34.200428746Z" level=info msg="connecting to shim 5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6" address="unix:///run/containerd/s/6b810c1a725873c7d4d47b4d555b2f0116f4d5f1059922e599c5ef7378f86fd9" protocol=ttrpc version=3 Sep 12 17:32:34.228908 systemd[1]: Started cri-containerd-5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6.scope - libcontainer container 5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6. Sep 12 17:32:34.262213 containerd[1527]: time="2025-09-12T17:32:34.262088914Z" level=info msg="StartContainer for \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" returns successfully" Sep 12 17:32:34.271267 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 12 17:32:34.271548 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:32:34.271820 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... 
Sep 12 17:32:34.273266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 12 17:32:34.274671 systemd[1]: cri-containerd-5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6.scope: Deactivated successfully. Sep 12 17:32:34.277810 containerd[1527]: time="2025-09-12T17:32:34.277745971Z" level=info msg="received exit event container_id:\"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" id:\"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" pid:3174 exited_at:{seconds:1757698354 nanos:277534901}" Sep 12 17:32:34.278199 containerd[1527]: time="2025-09-12T17:32:34.278152030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" id:\"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" pid:3174 exited_at:{seconds:1757698354 nanos:277534901}" Sep 12 17:32:34.307803 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 12 17:32:35.015439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3906083437.mount: Deactivated successfully. Sep 12 17:32:35.177199 kubelet[2686]: E0912 17:32:35.177164 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:35.182777 containerd[1527]: time="2025-09-12T17:32:35.180957841Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:32:35.211973 containerd[1527]: time="2025-09-12T17:32:35.211233421Z" level=info msg="Container 42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:35.226747 containerd[1527]: time="2025-09-12T17:32:35.226699797Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\"" Sep 12 17:32:35.227288 containerd[1527]: time="2025-09-12T17:32:35.227241592Z" level=info msg="StartContainer for \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\"" Sep 12 17:32:35.228839 containerd[1527]: time="2025-09-12T17:32:35.228741959Z" level=info msg="connecting to shim 42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c" address="unix:///run/containerd/s/6b810c1a725873c7d4d47b4d555b2f0116f4d5f1059922e599c5ef7378f86fd9" protocol=ttrpc version=3 Sep 12 17:32:35.257965 systemd[1]: Started cri-containerd-42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c.scope - libcontainer container 42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c. Sep 12 17:32:35.301044 systemd[1]: cri-containerd-42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c.scope: Deactivated successfully. 
Sep 12 17:32:35.304268 containerd[1527]: time="2025-09-12T17:32:35.304200418Z" level=info msg="received exit event container_id:\"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" id:\"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" pid:3230 exited_at:{seconds:1757698355 nanos:303588254}" Sep 12 17:32:35.304605 containerd[1527]: time="2025-09-12T17:32:35.304576630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" id:\"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" pid:3230 exited_at:{seconds:1757698355 nanos:303588254}" Sep 12 17:32:35.309945 containerd[1527]: time="2025-09-12T17:32:35.309910767Z" level=info msg="StartContainer for \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" returns successfully" Sep 12 17:32:35.528913 containerd[1527]: time="2025-09-12T17:32:35.528869561Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:35.531128 containerd[1527]: time="2025-09-12T17:32:35.531089148Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 12 17:32:35.532007 containerd[1527]: time="2025-09-12T17:32:35.531977590Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 12 17:32:35.533858 containerd[1527]: time="2025-09-12T17:32:35.533778639Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.092788467s" Sep 12 17:32:35.533858 containerd[1527]: time="2025-09-12T17:32:35.533816924Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 12 17:32:35.535992 containerd[1527]: time="2025-09-12T17:32:35.535958940Z" level=info msg="CreateContainer within sandbox \"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 12 17:32:35.544746 containerd[1527]: time="2025-09-12T17:32:35.544700907Z" level=info msg="Container e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:35.550517 containerd[1527]: time="2025-09-12T17:32:35.550468383Z" level=info msg="CreateContainer within sandbox \"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\"" Sep 12 17:32:35.550988 containerd[1527]: time="2025-09-12T17:32:35.550944649Z" level=info msg="StartContainer for \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\"" Sep 12 17:32:35.552157 containerd[1527]: 
time="2025-09-12T17:32:35.552084127Z" level=info msg="connecting to shim e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02" address="unix:///run/containerd/s/caf761bcd31e3b4a11727bcd69acef75c1953d2185e8f1caf02d1bd0c12169a7" protocol=ttrpc version=3 Sep 12 17:32:35.583997 systemd[1]: Started cri-containerd-e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02.scope - libcontainer container e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02. Sep 12 17:32:35.653964 containerd[1527]: time="2025-09-12T17:32:35.653925789Z" level=info msg="StartContainer for \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" returns successfully" Sep 12 17:32:36.184719 kubelet[2686]: E0912 17:32:36.184472 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:36.192787 kubelet[2686]: E0912 17:32:36.191794 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:36.195564 containerd[1527]: time="2025-09-12T17:32:36.195513825Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:32:36.209980 containerd[1527]: time="2025-09-12T17:32:36.209932814Z" level=info msg="Container 89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:36.217698 containerd[1527]: time="2025-09-12T17:32:36.217638394Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\"" Sep 12 17:32:36.219054 containerd[1527]: time="2025-09-12T17:32:36.219010175Z" level=info msg="StartContainer for \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\"" Sep 12 17:32:36.221340 containerd[1527]: time="2025-09-12T17:32:36.221221988Z" level=info msg="connecting to shim 89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4" address="unix:///run/containerd/s/6b810c1a725873c7d4d47b4d555b2f0116f4d5f1059922e599c5ef7378f86fd9" protocol=ttrpc version=3 Sep 12 17:32:36.230785 kubelet[2686]: I0912 17:32:36.230334 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-hjpx7" podStartSLOduration=1.530163969 podStartE2EDuration="18.230315952s" podCreationTimestamp="2025-09-12 17:32:18 +0000 UTC" firstStartedPulling="2025-09-12 17:32:18.834283867 +0000 UTC m=+7.840255342" lastFinishedPulling="2025-09-12 17:32:35.53443589 +0000 UTC m=+24.540407325" observedRunningTime="2025-09-12 17:32:36.202474506 +0000 UTC m=+25.208446021" watchObservedRunningTime="2025-09-12 17:32:36.230315952 +0000 UTC m=+25.236287427" Sep 12 17:32:36.253992 systemd[1]: Started cri-containerd-89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4.scope - libcontainer container 89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4. Sep 12 17:32:36.295016 systemd[1]: cri-containerd-89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4.scope: Deactivated successfully. 
Sep 12 17:32:36.297654 containerd[1527]: time="2025-09-12T17:32:36.297619823Z" level=info msg="StartContainer for \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" returns successfully" Sep 12 17:32:36.298153 containerd[1527]: time="2025-09-12T17:32:36.298124570Z" level=info msg="received exit event container_id:\"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" id:\"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" pid:3316 exited_at:{seconds:1757698356 nanos:297630744}" Sep 12 17:32:36.298902 containerd[1527]: time="2025-09-12T17:32:36.298628516Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" id:\"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" pid:3316 exited_at:{seconds:1757698356 nanos:297630744}" Sep 12 17:32:37.198989 kubelet[2686]: E0912 17:32:37.198956 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:37.199606 kubelet[2686]: E0912 17:32:37.199501 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:37.203347 containerd[1527]: time="2025-09-12T17:32:37.203288608Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:32:37.220958 containerd[1527]: time="2025-09-12T17:32:37.220865561Z" level=info msg="Container 0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:37.229366 containerd[1527]: time="2025-09-12T17:32:37.229307074Z" level=info msg="CreateContainer within sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\"" Sep 12 17:32:37.229924 containerd[1527]: time="2025-09-12T17:32:37.229892788Z" level=info msg="StartContainer for \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\"" Sep 12 17:32:37.230902 containerd[1527]: time="2025-09-12T17:32:37.230827827Z" level=info msg="connecting to shim 0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab" address="unix:///run/containerd/s/6b810c1a725873c7d4d47b4d555b2f0116f4d5f1059922e599c5ef7378f86fd9" protocol=ttrpc version=3 Sep 12 17:32:37.258694 systemd[1]: Started cri-containerd-0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab.scope - libcontainer container 0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab. 
Sep 12 17:32:37.303130 containerd[1527]: time="2025-09-12T17:32:37.303088449Z" level=info msg="StartContainer for \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" returns successfully" Sep 12 17:32:37.396884 containerd[1527]: time="2025-09-12T17:32:37.396833560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" id:\"e5a6ff5d11093ae8297980afab4cee9d0d702314e52d3f29a71be33df1ca09e4\" pid:3384 exited_at:{seconds:1757698357 nanos:396533402}" Sep 12 17:32:37.461906 kubelet[2686]: I0912 17:32:37.461800 2686 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 12 17:32:37.525134 systemd[1]: Created slice kubepods-burstable-podc57a2134_15ce_48b5_becd_6d47d9f695c9.slice - libcontainer container kubepods-burstable-podc57a2134_15ce_48b5_becd_6d47d9f695c9.slice. Sep 12 17:32:37.530621 systemd[1]: Created slice kubepods-burstable-podd3b033d2_f165_4cb5_b4b9_f3060e871bfe.slice - libcontainer container kubepods-burstable-podd3b033d2_f165_4cb5_b4b9_f3060e871bfe.slice. Sep 12 17:32:37.577506 kubelet[2686]: I0912 17:32:37.577412 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-68s7x\" (UniqueName: \"kubernetes.io/projected/d3b033d2-f165-4cb5-b4b9-f3060e871bfe-kube-api-access-68s7x\") pod \"coredns-668d6bf9bc-8wr6g\" (UID: \"d3b033d2-f165-4cb5-b4b9-f3060e871bfe\") " pod="kube-system/coredns-668d6bf9bc-8wr6g" Sep 12 17:32:37.577506 kubelet[2686]: I0912 17:32:37.577489 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c57a2134-15ce-48b5-becd-6d47d9f695c9-config-volume\") pod \"coredns-668d6bf9bc-2vvpx\" (UID: \"c57a2134-15ce-48b5-becd-6d47d9f695c9\") " pod="kube-system/coredns-668d6bf9bc-2vvpx" Sep 12 17:32:37.577653 kubelet[2686]: I0912 17:32:37.577537 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7w6t\" (UniqueName: \"kubernetes.io/projected/c57a2134-15ce-48b5-becd-6d47d9f695c9-kube-api-access-m7w6t\") pod \"coredns-668d6bf9bc-2vvpx\" (UID: \"c57a2134-15ce-48b5-becd-6d47d9f695c9\") " pod="kube-system/coredns-668d6bf9bc-2vvpx" Sep 12 17:32:37.577653 kubelet[2686]: I0912 17:32:37.577563 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d3b033d2-f165-4cb5-b4b9-f3060e871bfe-config-volume\") pod \"coredns-668d6bf9bc-8wr6g\" (UID: \"d3b033d2-f165-4cb5-b4b9-f3060e871bfe\") " pod="kube-system/coredns-668d6bf9bc-8wr6g" Sep 12 17:32:37.832588 kubelet[2686]: E0912 17:32:37.832502 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:37.833714 containerd[1527]: time="2025-09-12T17:32:37.833640381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vvpx,Uid:c57a2134-15ce-48b5-becd-6d47d9f695c9,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:37.834652 kubelet[2686]: E0912 17:32:37.834631 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:37.835390 containerd[1527]: time="2025-09-12T17:32:37.835257147Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-8wr6g,Uid:d3b033d2-f165-4cb5-b4b9-f3060e871bfe,Namespace:kube-system,Attempt:0,}" Sep 12 17:32:38.212090 kubelet[2686]: E0912 17:32:38.211256 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:38.228565 kubelet[2686]: I0912 17:32:38.228491 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4k9nm" podStartSLOduration=5.528915657 podStartE2EDuration="20.228476371s" podCreationTimestamp="2025-09-12 17:32:18 +0000 UTC" firstStartedPulling="2025-09-12 17:32:18.740204553 +0000 UTC m=+7.746175988" lastFinishedPulling="2025-09-12 17:32:33.439765227 +0000 UTC m=+22.445736702" observedRunningTime="2025-09-12 17:32:38.228365438 +0000 UTC m=+27.234336873" watchObservedRunningTime="2025-09-12 17:32:38.228476371 +0000 UTC m=+27.234447846" Sep 12 17:32:39.211316 kubelet[2686]: E0912 17:32:39.211273 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:39.399819 systemd-networkd[1434]: cilium_host: Link UP Sep 12 17:32:39.400400 systemd-networkd[1434]: cilium_net: Link UP Sep 12 17:32:39.400690 systemd-networkd[1434]: cilium_net: Gained carrier Sep 12 17:32:39.400886 systemd-networkd[1434]: cilium_host: Gained carrier Sep 12 17:32:39.485502 systemd-networkd[1434]: cilium_vxlan: Link UP Sep 12 17:32:39.485508 systemd-networkd[1434]: cilium_vxlan: Gained carrier Sep 12 17:32:39.627943 systemd-networkd[1434]: cilium_host: Gained IPv6LL Sep 12 17:32:39.699930 systemd-networkd[1434]: cilium_net: Gained IPv6LL Sep 12 17:32:39.756855 kernel: NET: Registered PF_ALG protocol family Sep 12 17:32:40.213619 kubelet[2686]: E0912 17:32:40.213560 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:40.342587 systemd-networkd[1434]: lxc_health: Link UP Sep 12 17:32:40.343355 systemd-networkd[1434]: lxc_health: Gained carrier Sep 12 17:32:40.909775 kernel: eth0: renamed from tmp49ebf Sep 12 17:32:40.910691 systemd-networkd[1434]: lxc233af704cd45: Link UP Sep 12 17:32:40.912868 systemd-networkd[1434]: lxcd36298800f00: Link UP Sep 12 17:32:40.919783 kernel: eth0: renamed from tmp623a7 Sep 12 17:32:40.921716 systemd-networkd[1434]: lxcd36298800f00: Gained carrier Sep 12 17:32:40.921917 systemd-networkd[1434]: lxc233af704cd45: Gained carrier Sep 12 17:32:41.215239 kubelet[2686]: E0912 17:32:41.215113 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:41.368507 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:32842.service - OpenSSH per-connection server daemon (10.0.0.1:32842). Sep 12 17:32:41.425063 sshd[3856]: Accepted publickey for core from 10.0.0.1 port 32842 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:32:41.426518 sshd-session[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:41.432602 systemd-logind[1496]: New session 8 of user core. Sep 12 17:32:41.444959 systemd[1]: Started session-8.scope - Session 8 of User core. 
Sep 12 17:32:41.467933 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL Sep 12 17:32:41.579305 sshd[3859]: Connection closed by 10.0.0.1 port 32842 Sep 12 17:32:41.579658 sshd-session[3856]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:41.584024 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:32842.service: Deactivated successfully. Sep 12 17:32:41.589845 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:32:41.594784 systemd-logind[1496]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:32:41.597978 systemd-logind[1496]: Removed session 8. Sep 12 17:32:42.299894 systemd-networkd[1434]: lxcd36298800f00: Gained IPv6LL Sep 12 17:32:42.300253 systemd-networkd[1434]: lxc_health: Gained IPv6LL Sep 12 17:32:42.427948 systemd-networkd[1434]: lxc233af704cd45: Gained IPv6LL Sep 12 17:32:44.590510 containerd[1527]: time="2025-09-12T17:32:44.590419901Z" level=info msg="connecting to shim 49ebf42a993bffc3c1f07085905e8035fef5493d72b79755d5b023764fc6cdeb" address="unix:///run/containerd/s/982325eea181787eae20853999fddbd32a40910368b478f9a5193fdabd05fea4" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:44.591476 containerd[1527]: time="2025-09-12T17:32:44.591423480Z" level=info msg="connecting to shim 623a799ecec45335e2f932670daefa1d33f6dec87f19492095a6f76f7a09a8b7" address="unix:///run/containerd/s/9878679ba9cd21467674a87e57c0a4a48fd3aa655ce04770077c92e560ae5160" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:32:44.616969 systemd[1]: Started cri-containerd-623a799ecec45335e2f932670daefa1d33f6dec87f19492095a6f76f7a09a8b7.scope - libcontainer container 623a799ecec45335e2f932670daefa1d33f6dec87f19492095a6f76f7a09a8b7. Sep 12 17:32:44.622317 systemd[1]: Started cri-containerd-49ebf42a993bffc3c1f07085905e8035fef5493d72b79755d5b023764fc6cdeb.scope - libcontainer container 49ebf42a993bffc3c1f07085905e8035fef5493d72b79755d5b023764fc6cdeb. 
Sep 12 17:32:44.632711 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:44.634965 systemd-resolved[1351]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:32:44.658829 containerd[1527]: time="2025-09-12T17:32:44.658705192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8wr6g,Uid:d3b033d2-f165-4cb5-b4b9-f3060e871bfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"623a799ecec45335e2f932670daefa1d33f6dec87f19492095a6f76f7a09a8b7\"" Sep 12 17:32:44.659785 kubelet[2686]: E0912 17:32:44.659549 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:44.660625 containerd[1527]: time="2025-09-12T17:32:44.660589377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-2vvpx,Uid:c57a2134-15ce-48b5-becd-6d47d9f695c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"49ebf42a993bffc3c1f07085905e8035fef5493d72b79755d5b023764fc6cdeb\"" Sep 12 17:32:44.663308 kubelet[2686]: E0912 17:32:44.662876 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:44.665084 containerd[1527]: time="2025-09-12T17:32:44.665033652Z" level=info msg="CreateContainer within sandbox \"49ebf42a993bffc3c1f07085905e8035fef5493d72b79755d5b023764fc6cdeb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:32:44.665520 containerd[1527]: time="2025-09-12T17:32:44.665155064Z" level=info msg="CreateContainer within sandbox \"623a799ecec45335e2f932670daefa1d33f6dec87f19492095a6f76f7a09a8b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:32:44.685020 containerd[1527]: time="2025-09-12T17:32:44.684970806Z" level=info msg="Container e922eac5d69646e1640212357617fe8c943a607a02482e98714e2707016f852f: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:44.688164 containerd[1527]: time="2025-09-12T17:32:44.688124195Z" level=info msg="Container 5ad17a0f2a390bb4bc9b433bb0f8d1fbcefa848b59d1609adf63993b9fe61451: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:32:44.698390 containerd[1527]: time="2025-09-12T17:32:44.698343756Z" level=info msg="CreateContainer within sandbox \"623a799ecec45335e2f932670daefa1d33f6dec87f19492095a6f76f7a09a8b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e922eac5d69646e1640212357617fe8c943a607a02482e98714e2707016f852f\"" Sep 12 17:32:44.700392 containerd[1527]: time="2025-09-12T17:32:44.700325471Z" level=info msg="CreateContainer within sandbox \"49ebf42a993bffc3c1f07085905e8035fef5493d72b79755d5b023764fc6cdeb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5ad17a0f2a390bb4bc9b433bb0f8d1fbcefa848b59d1609adf63993b9fe61451\"" Sep 12 17:32:44.701897 containerd[1527]: time="2025-09-12T17:32:44.700446962Z" level=info msg="StartContainer for \"e922eac5d69646e1640212357617fe8c943a607a02482e98714e2707016f852f\"" Sep 12 17:32:44.701897 containerd[1527]: time="2025-09-12T17:32:44.701333169Z" level=info msg="connecting to shim e922eac5d69646e1640212357617fe8c943a607a02482e98714e2707016f852f" address="unix:///run/containerd/s/9878679ba9cd21467674a87e57c0a4a48fd3aa655ce04770077c92e560ae5160" protocol=ttrpc version=3 Sep 12 17:32:44.702031 containerd[1527]: 
time="2025-09-12T17:32:44.701972352Z" level=info msg="StartContainer for \"5ad17a0f2a390bb4bc9b433bb0f8d1fbcefa848b59d1609adf63993b9fe61451\"" Sep 12 17:32:44.704257 containerd[1527]: time="2025-09-12T17:32:44.704206571Z" level=info msg="connecting to shim 5ad17a0f2a390bb4bc9b433bb0f8d1fbcefa848b59d1609adf63993b9fe61451" address="unix:///run/containerd/s/982325eea181787eae20853999fddbd32a40910368b478f9a5193fdabd05fea4" protocol=ttrpc version=3 Sep 12 17:32:44.737995 systemd[1]: Started cri-containerd-5ad17a0f2a390bb4bc9b433bb0f8d1fbcefa848b59d1609adf63993b9fe61451.scope - libcontainer container 5ad17a0f2a390bb4bc9b433bb0f8d1fbcefa848b59d1609adf63993b9fe61451. Sep 12 17:32:44.739107 systemd[1]: Started cri-containerd-e922eac5d69646e1640212357617fe8c943a607a02482e98714e2707016f852f.scope - libcontainer container e922eac5d69646e1640212357617fe8c943a607a02482e98714e2707016f852f. Sep 12 17:32:44.776144 containerd[1527]: time="2025-09-12T17:32:44.776097055Z" level=info msg="StartContainer for \"e922eac5d69646e1640212357617fe8c943a607a02482e98714e2707016f852f\" returns successfully" Sep 12 17:32:44.798327 containerd[1527]: time="2025-09-12T17:32:44.798286989Z" level=info msg="StartContainer for \"5ad17a0f2a390bb4bc9b433bb0f8d1fbcefa848b59d1609adf63993b9fe61451\" returns successfully" Sep 12 17:32:45.224668 kubelet[2686]: E0912 17:32:45.224523 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:45.228494 kubelet[2686]: E0912 17:32:45.228436 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:45.241328 kubelet[2686]: I0912 17:32:45.240079 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-2vvpx" podStartSLOduration=27.240058355 podStartE2EDuration="27.240058355s" podCreationTimestamp="2025-09-12 17:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:45.238516008 +0000 UTC m=+34.244487483" watchObservedRunningTime="2025-09-12 17:32:45.240058355 +0000 UTC m=+34.246029790" Sep 12 17:32:45.268191 kubelet[2686]: I0912 17:32:45.268134 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8wr6g" podStartSLOduration=27.268115975 podStartE2EDuration="27.268115975s" podCreationTimestamp="2025-09-12 17:32:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:32:45.266727563 +0000 UTC m=+34.272699038" watchObservedRunningTime="2025-09-12 17:32:45.268115975 +0000 UTC m=+34.274087410" Sep 12 17:32:45.276632 kubelet[2686]: I0912 17:32:45.276558 2686 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Sep 12 17:32:45.277960 kubelet[2686]: E0912 17:32:45.277652 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:45.567006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227086950.mount: Deactivated successfully. 
Sep 12 17:32:46.230396 kubelet[2686]: E0912 17:32:46.229792 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:46.230396 kubelet[2686]: E0912 17:32:46.229978 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:46.230396 kubelet[2686]: E0912 17:32:46.230262 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:46.598524 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:32850.service - OpenSSH per-connection server daemon (10.0.0.1:32850). Sep 12 17:32:46.661510 sshd[4053]: Accepted publickey for core from 10.0.0.1 port 32850 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:32:46.663030 sshd-session[4053]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:46.668068 systemd-logind[1496]: New session 9 of user core. Sep 12 17:32:46.677945 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:32:46.796593 sshd[4056]: Connection closed by 10.0.0.1 port 32850 Sep 12 17:32:46.796918 sshd-session[4053]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:46.800493 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:32850.service: Deactivated successfully. Sep 12 17:32:46.803706 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:32:46.804778 systemd-logind[1496]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:32:46.806359 systemd-logind[1496]: Removed session 9. Sep 12 17:32:47.231902 kubelet[2686]: E0912 17:32:47.231858 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:47.236292 kubelet[2686]: E0912 17:32:47.235995 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:32:51.813558 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:60914.service - OpenSSH per-connection server daemon (10.0.0.1:60914). Sep 12 17:32:51.888650 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 60914 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:32:51.890053 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:51.894015 systemd-logind[1496]: New session 10 of user core. Sep 12 17:32:51.909104 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:32:52.036494 sshd[4077]: Connection closed by 10.0.0.1 port 60914 Sep 12 17:32:52.036963 sshd-session[4074]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:52.048860 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:60914.service: Deactivated successfully. Sep 12 17:32:52.050392 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:32:52.051609 systemd-logind[1496]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:32:52.054927 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:60916.service - OpenSSH per-connection server daemon (10.0.0.1:60916). Sep 12 17:32:52.056990 systemd-logind[1496]: Removed session 10. 
Sep 12 17:32:52.111008 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 60916 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:32:52.112274 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:52.117006 systemd-logind[1496]: New session 11 of user core. Sep 12 17:32:52.125993 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:32:52.309519 sshd[4094]: Connection closed by 10.0.0.1 port 60916 Sep 12 17:32:52.311456 sshd-session[4091]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:52.323937 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:60916.service: Deactivated successfully. Sep 12 17:32:52.326726 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:32:52.328610 systemd-logind[1496]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:32:52.332692 systemd-logind[1496]: Removed session 11. Sep 12 17:32:52.335259 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:60924.service - OpenSSH per-connection server daemon (10.0.0.1:60924). Sep 12 17:32:52.400093 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 60924 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:32:52.401215 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:52.405460 systemd-logind[1496]: New session 12 of user core. Sep 12 17:32:52.418944 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:32:52.537426 sshd[4108]: Connection closed by 10.0.0.1 port 60924 Sep 12 17:32:52.537824 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:52.541111 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:60924.service: Deactivated successfully. Sep 12 17:32:52.543201 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:32:52.544993 systemd-logind[1496]: Session 12 logged out. Waiting for processes to exit. Sep 12 17:32:52.546635 systemd-logind[1496]: Removed session 12. Sep 12 17:32:57.552785 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:60936.service - OpenSSH per-connection server daemon (10.0.0.1:60936). Sep 12 17:32:57.626518 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 60936 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:32:57.627926 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:32:57.634014 systemd-logind[1496]: New session 13 of user core. Sep 12 17:32:57.640971 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:32:57.797232 sshd[4124]: Connection closed by 10.0.0.1 port 60936 Sep 12 17:32:57.797578 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Sep 12 17:32:57.802212 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:60936.service: Deactivated successfully. Sep 12 17:32:57.806547 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:32:57.809064 systemd-logind[1496]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:32:57.810263 systemd-logind[1496]: Removed session 13. Sep 12 17:33:02.815641 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:49576.service - OpenSSH per-connection server daemon (10.0.0.1:49576). 
Sep 12 17:33:02.885087 sshd[4138]: Accepted publickey for core from 10.0.0.1 port 49576 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:02.886801 sshd-session[4138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:02.896187 systemd-logind[1496]: New session 14 of user core. Sep 12 17:33:02.906003 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 12 17:33:03.049230 sshd[4141]: Connection closed by 10.0.0.1 port 49576 Sep 12 17:33:03.049787 sshd-session[4138]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:03.062647 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:49576.service: Deactivated successfully. Sep 12 17:33:03.064923 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:33:03.068429 systemd-logind[1496]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:33:03.071414 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:49582.service - OpenSSH per-connection server daemon (10.0.0.1:49582). Sep 12 17:33:03.073454 systemd-logind[1496]: Removed session 14. Sep 12 17:33:03.139015 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 49582 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:03.141020 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:03.148727 systemd-logind[1496]: New session 15 of user core. Sep 12 17:33:03.162987 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:33:03.384766 sshd[4158]: Connection closed by 10.0.0.1 port 49582 Sep 12 17:33:03.384982 sshd-session[4155]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:03.399199 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:49582.service: Deactivated successfully. Sep 12 17:33:03.403885 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:33:03.406384 systemd-logind[1496]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:33:03.412139 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:49598.service - OpenSSH per-connection server daemon (10.0.0.1:49598). Sep 12 17:33:03.412888 systemd-logind[1496]: Removed session 15. Sep 12 17:33:03.469419 sshd[4169]: Accepted publickey for core from 10.0.0.1 port 49598 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:03.470926 sshd-session[4169]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:03.475306 systemd-logind[1496]: New session 16 of user core. Sep 12 17:33:03.484948 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:33:04.148479 sshd[4172]: Connection closed by 10.0.0.1 port 49598 Sep 12 17:33:04.148860 sshd-session[4169]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:04.160259 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:49598.service: Deactivated successfully. Sep 12 17:33:04.163054 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:33:04.164355 systemd-logind[1496]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:33:04.168385 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:49614.service - OpenSSH per-connection server daemon (10.0.0.1:49614). Sep 12 17:33:04.169762 systemd-logind[1496]: Removed session 16. 
Sep 12 17:33:04.230233 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 49614 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:04.232019 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:04.236537 systemd-logind[1496]: New session 17 of user core. Sep 12 17:33:04.252029 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:33:04.497011 sshd[4194]: Connection closed by 10.0.0.1 port 49614 Sep 12 17:33:04.497997 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:04.509452 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:49614.service: Deactivated successfully. Sep 12 17:33:04.512742 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:33:04.515146 systemd-logind[1496]: Session 17 logged out. Waiting for processes to exit. Sep 12 17:33:04.519356 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:49628.service - OpenSSH per-connection server daemon (10.0.0.1:49628). Sep 12 17:33:04.520008 systemd-logind[1496]: Removed session 17. Sep 12 17:33:04.570702 sshd[4206]: Accepted publickey for core from 10.0.0.1 port 49628 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:04.572121 sshd-session[4206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:04.576662 systemd-logind[1496]: New session 18 of user core. Sep 12 17:33:04.590965 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:33:04.701593 sshd[4209]: Connection closed by 10.0.0.1 port 49628 Sep 12 17:33:04.701956 sshd-session[4206]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:04.705492 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:49628.service: Deactivated successfully. Sep 12 17:33:04.707193 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:33:04.707922 systemd-logind[1496]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:33:04.708940 systemd-logind[1496]: Removed session 18. Sep 12 17:33:09.722331 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:49630.service - OpenSSH per-connection server daemon (10.0.0.1:49630). Sep 12 17:33:09.772443 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 49630 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:09.774793 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:09.783595 systemd-logind[1496]: New session 19 of user core. Sep 12 17:33:09.797001 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 12 17:33:09.927636 sshd[4228]: Connection closed by 10.0.0.1 port 49630 Sep 12 17:33:09.928215 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:09.932125 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:49630.service: Deactivated successfully. Sep 12 17:33:09.934180 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:33:09.935238 systemd-logind[1496]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:33:09.936873 systemd-logind[1496]: Removed session 19. Sep 12 17:33:14.951515 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:55606.service - OpenSSH per-connection server daemon (10.0.0.1:55606). 
Sep 12 17:33:14.995540 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 55606 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:14.996951 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:15.000813 systemd-logind[1496]: New session 20 of user core. Sep 12 17:33:15.010020 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:33:15.140221 sshd[4247]: Connection closed by 10.0.0.1 port 55606 Sep 12 17:33:15.140767 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:15.144730 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:55606.service: Deactivated successfully. Sep 12 17:33:15.146532 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:33:15.148856 systemd-logind[1496]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:33:15.149878 systemd-logind[1496]: Removed session 20. Sep 12 17:33:20.156276 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:39792.service - OpenSSH per-connection server daemon (10.0.0.1:39792). Sep 12 17:33:20.216930 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 39792 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:20.218835 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:20.224553 systemd-logind[1496]: New session 21 of user core. Sep 12 17:33:20.234917 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:33:20.362136 sshd[4265]: Connection closed by 10.0.0.1 port 39792 Sep 12 17:33:20.363081 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:20.374174 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:39792.service: Deactivated successfully. Sep 12 17:33:20.376208 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:33:20.377283 systemd-logind[1496]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:33:20.380086 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:39800.service - OpenSSH per-connection server daemon (10.0.0.1:39800). Sep 12 17:33:20.381468 systemd-logind[1496]: Removed session 21. Sep 12 17:33:20.434874 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 39800 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:20.436208 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:20.440700 systemd-logind[1496]: New session 22 of user core. Sep 12 17:33:20.454963 systemd[1]: Started session-22.scope - Session 22 of User core. Sep 12 17:33:22.307854 containerd[1527]: time="2025-09-12T17:33:22.307319282Z" level=info msg="StopContainer for \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" with timeout 30 (s)" Sep 12 17:33:22.308320 containerd[1527]: time="2025-09-12T17:33:22.308303723Z" level=info msg="Stop container \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" with signal terminated" Sep 12 17:33:22.321315 systemd[1]: cri-containerd-e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02.scope: Deactivated successfully. 
Sep 12 17:33:22.324431 containerd[1527]: time="2025-09-12T17:33:22.324325758Z" level=info msg="received exit event container_id:\"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" id:\"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" pid:3278 exited_at:{seconds:1757698402 nanos:324070808}" Sep 12 17:33:22.324647 containerd[1527]: time="2025-09-12T17:33:22.324404074Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" id:\"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" pid:3278 exited_at:{seconds:1757698402 nanos:324070808}" Sep 12 17:33:22.331771 containerd[1527]: time="2025-09-12T17:33:22.331693021Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:33:22.333763 containerd[1527]: time="2025-09-12T17:33:22.333714419Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" id:\"b19d3acf77b3f3ec5233d07a323d35ef8d3422581254ea19d09283a9c9d47443\" pid:4307 exited_at:{seconds:1757698402 nanos:333131483}" Sep 12 17:33:22.342291 containerd[1527]: time="2025-09-12T17:33:22.342254995Z" level=info msg="StopContainer for \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" with timeout 2 (s)" Sep 12 17:33:22.342760 containerd[1527]: time="2025-09-12T17:33:22.342724577Z" level=info msg="Stop container \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" with signal terminated" Sep 12 17:33:22.347980 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02-rootfs.mount: Deactivated successfully. Sep 12 17:33:22.351813 systemd-networkd[1434]: lxc_health: Link DOWN Sep 12 17:33:22.351820 systemd-networkd[1434]: lxc_health: Lost carrier Sep 12 17:33:22.368860 containerd[1527]: time="2025-09-12T17:33:22.368616774Z" level=info msg="StopContainer for \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" returns successfully" Sep 12 17:33:22.368864 systemd[1]: cri-containerd-0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab.scope: Deactivated successfully. Sep 12 17:33:22.369179 systemd[1]: cri-containerd-0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab.scope: Consumed 6.339s CPU time, 123.3M memory peak, 144K read from disk, 12.9M written to disk. 
Sep 12 17:33:22.371340 containerd[1527]: time="2025-09-12T17:33:22.371297986Z" level=info msg="received exit event container_id:\"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" id:\"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" pid:3352 exited_at:{seconds:1757698402 nanos:371017517}" Sep 12 17:33:22.371504 containerd[1527]: time="2025-09-12T17:33:22.371478818Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" id:\"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" pid:3352 exited_at:{seconds:1757698402 nanos:371017517}" Sep 12 17:33:22.372529 containerd[1527]: time="2025-09-12T17:33:22.372503057Z" level=info msg="StopPodSandbox for \"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\"" Sep 12 17:33:22.378689 containerd[1527]: time="2025-09-12T17:33:22.378638370Z" level=info msg="Container to stop \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:33:22.385498 systemd[1]: cri-containerd-74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652.scope: Deactivated successfully. Sep 12 17:33:22.390594 containerd[1527]: time="2025-09-12T17:33:22.390548090Z" level=info msg="TaskExit event in podsandbox handler container_id:\"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\" id:\"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\" pid:2915 exit_status:137 exited_at:{seconds:1757698402 nanos:390121228}" Sep 12 17:33:22.393723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab-rootfs.mount: Deactivated successfully. 
Sep 12 17:33:22.401183 containerd[1527]: time="2025-09-12T17:33:22.401063747Z" level=info msg="StopContainer for \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" returns successfully" Sep 12 17:33:22.401630 containerd[1527]: time="2025-09-12T17:33:22.401595805Z" level=info msg="StopPodSandbox for \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\"" Sep 12 17:33:22.401686 containerd[1527]: time="2025-09-12T17:33:22.401668803Z" level=info msg="Container to stop \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:33:22.401716 containerd[1527]: time="2025-09-12T17:33:22.401686642Z" level=info msg="Container to stop \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:33:22.401716 containerd[1527]: time="2025-09-12T17:33:22.401697401Z" level=info msg="Container to stop \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:33:22.401716 containerd[1527]: time="2025-09-12T17:33:22.401706001Z" level=info msg="Container to stop \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:33:22.401716 containerd[1527]: time="2025-09-12T17:33:22.401714521Z" level=info msg="Container to stop \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 12 17:33:22.408170 systemd[1]: cri-containerd-dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550.scope: Deactivated successfully. Sep 12 17:33:22.421960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652-rootfs.mount: Deactivated successfully. Sep 12 17:33:22.431236 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550-rootfs.mount: Deactivated successfully. 
Sep 12 17:33:22.432711 containerd[1527]: time="2025-09-12T17:33:22.432646475Z" level=info msg="shim disconnected" id=74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652 namespace=k8s.io Sep 12 17:33:22.432711 containerd[1527]: time="2025-09-12T17:33:22.432683713Z" level=warning msg="cleaning up after shim disconnected" id=74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652 namespace=k8s.io Sep 12 17:33:22.432711 containerd[1527]: time="2025-09-12T17:33:22.432713272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:33:22.432898 containerd[1527]: time="2025-09-12T17:33:22.432726432Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" id:\"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" pid:2844 exit_status:137 exited_at:{seconds:1757698402 nanos:408469209}" Sep 12 17:33:22.435602 containerd[1527]: time="2025-09-12T17:33:22.433373486Z" level=info msg="received exit event sandbox_id:\"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" exit_status:137 exited_at:{seconds:1757698402 nanos:408469209}" Sep 12 17:33:22.435602 containerd[1527]: time="2025-09-12T17:33:22.433590077Z" level=info msg="received exit event sandbox_id:\"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\" exit_status:137 exited_at:{seconds:1757698402 nanos:390121228}" Sep 12 17:33:22.435017 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652-shm.mount: Deactivated successfully. Sep 12 17:33:22.436076 containerd[1527]: time="2025-09-12T17:33:22.436033818Z" level=info msg="TearDown network for sandbox \"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\" successfully" Sep 12 17:33:22.436076 containerd[1527]: time="2025-09-12T17:33:22.436065897Z" level=info msg="StopPodSandbox for \"74ab9872f9e3253c7106118343ab7b4aaed414e5b93fe4a7ba3a5e96a5f77652\" returns successfully" Sep 12 17:33:22.436641 containerd[1527]: time="2025-09-12T17:33:22.436609075Z" level=info msg="shim disconnected" id=dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550 namespace=k8s.io Sep 12 17:33:22.436712 containerd[1527]: time="2025-09-12T17:33:22.436636634Z" level=warning msg="cleaning up after shim disconnected" id=dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550 namespace=k8s.io Sep 12 17:33:22.436712 containerd[1527]: time="2025-09-12T17:33:22.436666593Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 12 17:33:22.437027 containerd[1527]: time="2025-09-12T17:33:22.436956221Z" level=info msg="TearDown network for sandbox \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" successfully" Sep 12 17:33:22.437027 containerd[1527]: time="2025-09-12T17:33:22.436985500Z" level=info msg="StopPodSandbox for \"dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550\" returns successfully" Sep 12 17:33:22.485264 kubelet[2686]: I0912 17:33:22.485210 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-bpf-maps\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485264 kubelet[2686]: I0912 17:33:22.485258 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cni-path\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485264 kubelet[2686]: I0912 17:33:22.485273 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-hostproc\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485701 kubelet[2686]: I0912 17:33:22.485295 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-config-path\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485701 kubelet[2686]: I0912 17:33:22.485312 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-net\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485701 kubelet[2686]: I0912 17:33:22.485325 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-lib-modules\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485701 kubelet[2686]: I0912 17:33:22.485338 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-xtables-lock\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485701 kubelet[2686]: I0912 17:33:22.485355 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5tkdv\" (UniqueName: \"kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-kube-api-access-5tkdv\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485701 kubelet[2686]: I0912 17:33:22.485369 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-kernel\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485872 kubelet[2686]: I0912 17:33:22.485386 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-hubble-tls\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485872 kubelet[2686]: I0912 17:33:22.485407 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba139a6d-0a33-4a3d-a8da-792686278fe8-clustermesh-secrets\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485872 kubelet[2686]: I0912 17:33:22.485423 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-run\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485872 kubelet[2686]: I0912 17:33:22.485468 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nb9k\" (UniqueName: \"kubernetes.io/projected/870ac407-5d29-43d5-8640-e6a08b763722-kube-api-access-4nb9k\") pod \"870ac407-5d29-43d5-8640-e6a08b763722\" (UID: \"870ac407-5d29-43d5-8640-e6a08b763722\") " Sep 12 17:33:22.485872 kubelet[2686]: I0912 17:33:22.485485 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-etc-cni-netd\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.485872 kubelet[2686]: I0912 17:33:22.485499 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-cgroup\") pod \"ba139a6d-0a33-4a3d-a8da-792686278fe8\" (UID: \"ba139a6d-0a33-4a3d-a8da-792686278fe8\") " Sep 12 17:33:22.486011 kubelet[2686]: I0912 17:33:22.485515 2686 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870ac407-5d29-43d5-8640-e6a08b763722-cilium-config-path\") pod \"870ac407-5d29-43d5-8640-e6a08b763722\" (UID: \"870ac407-5d29-43d5-8640-e6a08b763722\") " Sep 12 17:33:22.486317 kubelet[2686]: I0912 17:33:22.486166 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.486317 kubelet[2686]: I0912 17:33:22.486234 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.486317 kubelet[2686]: I0912 17:33:22.486251 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.487121 kubelet[2686]: I0912 17:33:22.487093 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.487787 kubelet[2686]: I0912 17:33:22.487622 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-hostproc" (OuterVolumeSpecName: "hostproc") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.487787 kubelet[2686]: I0912 17:33:22.487432 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/870ac407-5d29-43d5-8640-e6a08b763722-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "870ac407-5d29-43d5-8640-e6a08b763722" (UID: "870ac407-5d29-43d5-8640-e6a08b763722"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:33:22.487787 kubelet[2686]: I0912 17:33:22.487528 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 12 17:33:22.487787 kubelet[2686]: I0912 17:33:22.487600 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.487787 kubelet[2686]: I0912 17:33:22.487613 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cni-path" (OuterVolumeSpecName: "cni-path") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.487959 kubelet[2686]: I0912 17:33:22.487682 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.487959 kubelet[2686]: I0912 17:33:22.487699 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.487959 kubelet[2686]: I0912 17:33:22.487724 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 12 17:33:22.489211 kubelet[2686]: I0912 17:33:22.489113 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:33:22.489446 kubelet[2686]: I0912 17:33:22.489423 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ba139a6d-0a33-4a3d-a8da-792686278fe8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 12 17:33:22.489652 kubelet[2686]: I0912 17:33:22.489628 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-kube-api-access-5tkdv" (OuterVolumeSpecName: "kube-api-access-5tkdv") pod "ba139a6d-0a33-4a3d-a8da-792686278fe8" (UID: "ba139a6d-0a33-4a3d-a8da-792686278fe8"). InnerVolumeSpecName "kube-api-access-5tkdv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:33:22.490150 kubelet[2686]: I0912 17:33:22.490125 2686 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/870ac407-5d29-43d5-8640-e6a08b763722-kube-api-access-4nb9k" (OuterVolumeSpecName: "kube-api-access-4nb9k") pod "870ac407-5d29-43d5-8640-e6a08b763722" (UID: "870ac407-5d29-43d5-8640-e6a08b763722"). InnerVolumeSpecName "kube-api-access-4nb9k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 12 17:33:22.586126 kubelet[2686]: I0912 17:33:22.586060 2686 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586126 kubelet[2686]: I0912 17:33:22.586097 2686 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586126 kubelet[2686]: I0912 17:33:22.586109 2686 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ba139a6d-0a33-4a3d-a8da-792686278fe8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586126 kubelet[2686]: I0912 17:33:22.586118 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586126 kubelet[2686]: I0912 17:33:22.586129 2686 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4nb9k\" (UniqueName: \"kubernetes.io/projected/870ac407-5d29-43d5-8640-e6a08b763722-kube-api-access-4nb9k\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586126 kubelet[2686]: I0912 17:33:22.586137 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/870ac407-5d29-43d5-8640-e6a08b763722-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586145 2686 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586153 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586162 2686 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586170 2686 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586177 2686 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586185 2686 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ba139a6d-0a33-4a3d-a8da-792686278fe8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586192 2686 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-host-proc-sys-net\") on node 
\"localhost\" DevicePath \"\"" Sep 12 17:33:22.586417 kubelet[2686]: I0912 17:33:22.586201 2686 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5tkdv\" (UniqueName: \"kubernetes.io/projected/ba139a6d-0a33-4a3d-a8da-792686278fe8-kube-api-access-5tkdv\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586573 kubelet[2686]: I0912 17:33:22.586210 2686 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:22.586573 kubelet[2686]: I0912 17:33:22.586217 2686 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba139a6d-0a33-4a3d-a8da-792686278fe8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 12 17:33:23.122620 systemd[1]: Removed slice kubepods-burstable-podba139a6d_0a33_4a3d_a8da_792686278fe8.slice - libcontainer container kubepods-burstable-podba139a6d_0a33_4a3d_a8da_792686278fe8.slice. Sep 12 17:33:23.122711 systemd[1]: kubepods-burstable-podba139a6d_0a33_4a3d_a8da_792686278fe8.slice: Consumed 6.428s CPU time, 123.6M memory peak, 1.1M read from disk, 12.9M written to disk. Sep 12 17:33:23.124908 systemd[1]: Removed slice kubepods-besteffort-pod870ac407_5d29_43d5_8640_e6a08b763722.slice - libcontainer container kubepods-besteffort-pod870ac407_5d29_43d5_8640_e6a08b763722.slice. Sep 12 17:33:23.324651 kubelet[2686]: I0912 17:33:23.324618 2686 scope.go:117] "RemoveContainer" containerID="e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02" Sep 12 17:33:23.327341 containerd[1527]: time="2025-09-12T17:33:23.327300378Z" level=info msg="RemoveContainer for \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\"" Sep 12 17:33:23.338999 containerd[1527]: time="2025-09-12T17:33:23.338939137Z" level=info msg="RemoveContainer for \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" returns successfully" Sep 12 17:33:23.339820 containerd[1527]: time="2025-09-12T17:33:23.339517035Z" level=error msg="ContainerStatus for \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\": not found" Sep 12 17:33:23.339872 kubelet[2686]: I0912 17:33:23.339175 2686 scope.go:117] "RemoveContainer" containerID="e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02" Sep 12 17:33:23.340890 kubelet[2686]: E0912 17:33:23.340839 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\": not found" containerID="e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02" Sep 12 17:33:23.344857 kubelet[2686]: I0912 17:33:23.344744 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02"} err="failed to get container status \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\": rpc error: code = NotFound desc = an error occurred when try to find container \"e21f6ffca80c085251c5655a34c0add9c6ed6780d5a7f445d48159174be54c02\": not found" Sep 12 17:33:23.344857 kubelet[2686]: I0912 17:33:23.344860 2686 scope.go:117] "RemoveContainer" 
containerID="0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab" Sep 12 17:33:23.348274 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dee8fbbd80cf0fbf24cb306e425dd0b72f0ba7bb7f5f6d1f2ac709c19958b550-shm.mount: Deactivated successfully. Sep 12 17:33:23.348384 systemd[1]: var-lib-kubelet-pods-870ac407\x2d5d29\x2d43d5\x2d8640\x2de6a08b763722-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nb9k.mount: Deactivated successfully. Sep 12 17:33:23.348454 systemd[1]: var-lib-kubelet-pods-ba139a6d\x2d0a33\x2d4a3d\x2da8da\x2d792686278fe8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5tkdv.mount: Deactivated successfully. Sep 12 17:33:23.348504 systemd[1]: var-lib-kubelet-pods-ba139a6d\x2d0a33\x2d4a3d\x2da8da\x2d792686278fe8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 12 17:33:23.348552 systemd[1]: var-lib-kubelet-pods-ba139a6d\x2d0a33\x2d4a3d\x2da8da\x2d792686278fe8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 12 17:33:23.350513 containerd[1527]: time="2025-09-12T17:33:23.350081834Z" level=info msg="RemoveContainer for \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\"" Sep 12 17:33:23.360364 containerd[1527]: time="2025-09-12T17:33:23.360317326Z" level=info msg="RemoveContainer for \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" returns successfully" Sep 12 17:33:23.360609 kubelet[2686]: I0912 17:33:23.360529 2686 scope.go:117] "RemoveContainer" containerID="89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4" Sep 12 17:33:23.362312 containerd[1527]: time="2025-09-12T17:33:23.362286372Z" level=info msg="RemoveContainer for \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\"" Sep 12 17:33:23.365984 containerd[1527]: time="2025-09-12T17:33:23.365909435Z" level=info msg="RemoveContainer for \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" returns successfully" Sep 12 17:33:23.366115 kubelet[2686]: I0912 17:33:23.366071 2686 scope.go:117] "RemoveContainer" containerID="42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c" Sep 12 17:33:23.368275 containerd[1527]: time="2025-09-12T17:33:23.368239626Z" level=info msg="RemoveContainer for \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\"" Sep 12 17:33:23.371364 containerd[1527]: time="2025-09-12T17:33:23.371325709Z" level=info msg="RemoveContainer for \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" returns successfully" Sep 12 17:33:23.371476 kubelet[2686]: I0912 17:33:23.371456 2686 scope.go:117] "RemoveContainer" containerID="5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6" Sep 12 17:33:23.372976 containerd[1527]: time="2025-09-12T17:33:23.372894970Z" level=info msg="RemoveContainer for \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\"" Sep 12 17:33:23.377158 containerd[1527]: time="2025-09-12T17:33:23.377127169Z" level=info msg="RemoveContainer for \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" returns successfully" Sep 12 17:33:23.377348 kubelet[2686]: I0912 17:33:23.377301 2686 scope.go:117] "RemoveContainer" containerID="76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337" Sep 12 17:33:23.378623 containerd[1527]: time="2025-09-12T17:33:23.378585394Z" level=info msg="RemoveContainer for \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\"" Sep 12 17:33:23.381880 containerd[1527]: 
time="2025-09-12T17:33:23.381855270Z" level=info msg="RemoveContainer for \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" returns successfully" Sep 12 17:33:23.382107 kubelet[2686]: I0912 17:33:23.382025 2686 scope.go:117] "RemoveContainer" containerID="0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab" Sep 12 17:33:23.382313 containerd[1527]: time="2025-09-12T17:33:23.382282334Z" level=error msg="ContainerStatus for \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\": not found" Sep 12 17:33:23.382419 kubelet[2686]: E0912 17:33:23.382396 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\": not found" containerID="0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab" Sep 12 17:33:23.382453 kubelet[2686]: I0912 17:33:23.382427 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab"} err="failed to get container status \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"0d2ac6fc6076671023b720669e6aa84583e9a7d609b89e413a89e39005f054ab\": not found" Sep 12 17:33:23.382453 kubelet[2686]: I0912 17:33:23.382451 2686 scope.go:117] "RemoveContainer" containerID="89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4" Sep 12 17:33:23.382651 containerd[1527]: time="2025-09-12T17:33:23.382612962Z" level=error msg="ContainerStatus for \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\": not found" Sep 12 17:33:23.382787 kubelet[2686]: E0912 17:33:23.382764 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\": not found" containerID="89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4" Sep 12 17:33:23.382826 kubelet[2686]: I0912 17:33:23.382792 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4"} err="failed to get container status \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\": rpc error: code = NotFound desc = an error occurred when try to find container \"89bfacdb74dc827114a33a1c06c7bfd9db3c22bdedd11d107268bee59028b5b4\": not found" Sep 12 17:33:23.382826 kubelet[2686]: I0912 17:33:23.382812 2686 scope.go:117] "RemoveContainer" containerID="42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c" Sep 12 17:33:23.383077 containerd[1527]: time="2025-09-12T17:33:23.383049105Z" level=error msg="ContainerStatus for \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\": not found" Sep 12 17:33:23.383182 kubelet[2686]: E0912 
17:33:23.383160 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\": not found" containerID="42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c" Sep 12 17:33:23.383215 kubelet[2686]: I0912 17:33:23.383189 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c"} err="failed to get container status \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\": rpc error: code = NotFound desc = an error occurred when try to find container \"42b804bc55452b96fbb02bb569153e7d4ce64c603580f89d1eb29c8ec62b369c\": not found" Sep 12 17:33:23.383215 kubelet[2686]: I0912 17:33:23.383207 2686 scope.go:117] "RemoveContainer" containerID="5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6" Sep 12 17:33:23.383408 containerd[1527]: time="2025-09-12T17:33:23.383378653Z" level=error msg="ContainerStatus for \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\": not found" Sep 12 17:33:23.383530 kubelet[2686]: E0912 17:33:23.383509 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\": not found" containerID="5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6" Sep 12 17:33:23.383573 kubelet[2686]: I0912 17:33:23.383535 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6"} err="failed to get container status \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ff17a7c0a5c8aeda1ff43c52be9ad778d02ef127858da46677aaded9deff9c6\": not found" Sep 12 17:33:23.383607 kubelet[2686]: I0912 17:33:23.383574 2686 scope.go:117] "RemoveContainer" containerID="76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337" Sep 12 17:33:23.383811 containerd[1527]: time="2025-09-12T17:33:23.383739319Z" level=error msg="ContainerStatus for \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\": not found" Sep 12 17:33:23.384008 kubelet[2686]: E0912 17:33:23.383982 2686 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\": not found" containerID="76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337" Sep 12 17:33:23.384069 kubelet[2686]: I0912 17:33:23.384009 2686 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337"} err="failed to get container status \"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"76bf337c402c745b78a5490a30e10207478056e685abd89a448f4fe5d001a337\": not found" Sep 12 17:33:24.266681 sshd[4281]: Connection closed by 10.0.0.1 port 39800 Sep 12 17:33:24.268247 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:24.279242 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:39800.service: Deactivated successfully. Sep 12 17:33:24.286449 systemd[1]: session-22.scope: Deactivated successfully. Sep 12 17:33:24.286703 systemd[1]: session-22.scope: Consumed 1.178s CPU time, 24.5M memory peak. Sep 12 17:33:24.287467 systemd-logind[1496]: Session 22 logged out. Waiting for processes to exit. Sep 12 17:33:24.292775 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:39808.service - OpenSSH per-connection server daemon (10.0.0.1:39808). Sep 12 17:33:24.293629 systemd-logind[1496]: Removed session 22. Sep 12 17:33:24.364223 sshd[4432]: Accepted publickey for core from 10.0.0.1 port 39808 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:24.368403 sshd-session[4432]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:24.374906 systemd-logind[1496]: New session 23 of user core. Sep 12 17:33:24.392954 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 12 17:33:25.112356 kubelet[2686]: I0912 17:33:25.112274 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="870ac407-5d29-43d5-8640-e6a08b763722" path="/var/lib/kubelet/pods/870ac407-5d29-43d5-8640-e6a08b763722/volumes" Sep 12 17:33:25.113927 kubelet[2686]: I0912 17:33:25.113823 2686 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba139a6d-0a33-4a3d-a8da-792686278fe8" path="/var/lib/kubelet/pods/ba139a6d-0a33-4a3d-a8da-792686278fe8/volumes" Sep 12 17:33:25.590406 sshd[4435]: Connection closed by 10.0.0.1 port 39808 Sep 12 17:33:25.592183 sshd-session[4432]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:25.603117 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:39808.service: Deactivated successfully. Sep 12 17:33:25.606883 systemd[1]: session-23.scope: Deactivated successfully. Sep 12 17:33:25.607216 systemd[1]: session-23.scope: Consumed 1.058s CPU time, 23.8M memory peak. Sep 12 17:33:25.608987 systemd-logind[1496]: Session 23 logged out. Waiting for processes to exit. Sep 12 17:33:25.613121 systemd[1]: Started sshd@23-10.0.0.138:22-10.0.0.1:39820.service - OpenSSH per-connection server daemon (10.0.0.1:39820). Sep 12 17:33:25.617075 systemd-logind[1496]: Removed session 23. Sep 12 17:33:25.620419 kubelet[2686]: I0912 17:33:25.620324 2686 memory_manager.go:355] "RemoveStaleState removing state" podUID="870ac407-5d29-43d5-8640-e6a08b763722" containerName="cilium-operator" Sep 12 17:33:25.620419 kubelet[2686]: I0912 17:33:25.620356 2686 memory_manager.go:355] "RemoveStaleState removing state" podUID="ba139a6d-0a33-4a3d-a8da-792686278fe8" containerName="cilium-agent" Sep 12 17:33:25.635181 systemd[1]: Created slice kubepods-burstable-pod3d753317_ee5e_4a95_91de_df9b088f897c.slice - libcontainer container kubepods-burstable-pod3d753317_ee5e_4a95_91de_df9b088f897c.slice. Sep 12 17:33:25.685627 sshd[4447]: Accepted publickey for core from 10.0.0.1 port 39820 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:25.688829 sshd-session[4447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:25.697273 systemd-logind[1496]: New session 24 of user core. 
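The mount units that systemd reports deactivating above (var-lib-kubelet-pods-870ac407\x2d5d29\x2d...-kube\x2dapi\x2daccess\x2d4nb9k.mount and friends) are kubelet volume paths run through systemd's unit-name escaping: '/' becomes '-' and most other punctuation is hex-escaped ('-' to \x2d, '~' to \x7e). A rough Python sketch that reproduces the names seen in this log; the authoritative rules are in systemd.unit(5) and systemd-escape(1), and this only covers the characters that occur here:

import string

_KEEP = set(string.ascii_letters + string.digits + "_.")

def escape_path(path: str) -> str:
    """Roughly mimic `systemd-escape --path` for the mount paths in this log."""
    out = []
    for ch in path.strip("/"):
        if ch == "/":
            out.append("-")                   # path separators become '-'
        elif ch in _KEEP:
            out.append(ch)                    # alphanumerics, '_' and '.' pass through
        else:
            out.append("\\x%02x" % ord(ch))   # everything else is hex-escaped
    return "".join(out)

print(escape_path(
    "/var/lib/kubelet/pods/870ac407-5d29-43d5-8640-e6a08b763722"
    "/volumes/kubernetes.io~projected/kube-api-access-4nb9k") + ".mount")
# -> var-lib-kubelet-pods-870ac407\x2d5d29\x2d43d5\x2d8640\x2de6a08b763722-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4nb9k.mount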
Sep 12 17:33:25.706143 kubelet[2686]: I0912 17:33:25.706108 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-xtables-lock\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706143 kubelet[2686]: I0912 17:33:25.706149 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3d753317-ee5e-4a95-91de-df9b088f897c-clustermesh-secrets\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706299 kubelet[2686]: I0912 17:33:25.706170 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3d753317-ee5e-4a95-91de-df9b088f897c-cilium-ipsec-secrets\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706299 kubelet[2686]: I0912 17:33:25.706186 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-hostproc\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706299 kubelet[2686]: I0912 17:33:25.706203 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-lib-modules\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706299 kubelet[2686]: I0912 17:33:25.706218 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-etc-cni-netd\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706299 kubelet[2686]: I0912 17:33:25.706233 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-host-proc-sys-net\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706299 kubelet[2686]: I0912 17:33:25.706248 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-cilium-cgroup\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706424 kubelet[2686]: I0912 17:33:25.706270 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2fxt\" (UniqueName: \"kubernetes.io/projected/3d753317-ee5e-4a95-91de-df9b088f897c-kube-api-access-f2fxt\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706424 kubelet[2686]: I0912 17:33:25.706290 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3d753317-ee5e-4a95-91de-df9b088f897c-hubble-tls\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706424 kubelet[2686]: I0912 17:33:25.706316 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-bpf-maps\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706424 kubelet[2686]: I0912 17:33:25.706341 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-cni-path\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706424 kubelet[2686]: I0912 17:33:25.706358 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-host-proc-sys-kernel\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706424 kubelet[2686]: I0912 17:33:25.706375 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3d753317-ee5e-4a95-91de-df9b088f897c-cilium-run\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706540 kubelet[2686]: I0912 17:33:25.706390 2686 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3d753317-ee5e-4a95-91de-df9b088f897c-cilium-config-path\") pod \"cilium-pslkq\" (UID: \"3d753317-ee5e-4a95-91de-df9b088f897c\") " pod="kube-system/cilium-pslkq" Sep 12 17:33:25.706984 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 12 17:33:25.757014 sshd[4450]: Connection closed by 10.0.0.1 port 39820 Sep 12 17:33:25.757380 sshd-session[4447]: pam_unix(sshd:session): session closed for user core Sep 12 17:33:25.768425 systemd[1]: sshd@23-10.0.0.138:22-10.0.0.1:39820.service: Deactivated successfully. Sep 12 17:33:25.770731 systemd[1]: session-24.scope: Deactivated successfully. Sep 12 17:33:25.772156 systemd-logind[1496]: Session 24 logged out. Waiting for processes to exit. Sep 12 17:33:25.774303 systemd[1]: Started sshd@24-10.0.0.138:22-10.0.0.1:39824.service - OpenSSH per-connection server daemon (10.0.0.1:39824). Sep 12 17:33:25.775579 systemd-logind[1496]: Removed session 24. Sep 12 17:33:25.838345 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 39824 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:33:25.839592 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:33:25.843379 systemd-logind[1496]: New session 25 of user core. Sep 12 17:33:25.849948 systemd[1]: Started session-25.scope - Session 25 of User core. 
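The reconciler lines above enumerate every volume kubelet attaches for the new cilium-pslkq pod: the host paths, the clustermesh-secrets and cilium-ipsec-secrets secrets, the hubble-tls projection, the cilium-config-path config map, and the kube-api-access token. A small sketch, keyed to the \"...\" quoting style visible in this journal text, to tabulate them from a saved copy of the log (the "node.log" filename is a placeholder):

import re

# Matches kubelet's VerifyControllerAttachedVolume lines as rendered in this
# journal, where quoted values appear as \"...\".
VOL_RE = re.compile(
    r'started for volume \\"(?P<volume>[^"\\]+)\\" '
    r'\(UniqueName: \\"(?P<unique>[^"\\]+)\\"\) '
    r'pod \\"cilium-pslkq\\"'
)

def cilium_pslkq_volumes(journal_text: str) -> dict:
    """Map volume name -> volume plugin (e.g. kubernetes.io/host-path)."""
    vols = {}
    for m in VOL_RE.finditer(journal_text):
        plugin = "/".join(m.group("unique").split("/")[:2])
        vols[m.group("volume")] = plugin
    return vols

# Example use (placeholder path):
# cilium_pslkq_volumes(open("node.log").read())
# -> {'xtables-lock': 'kubernetes.io/host-path',
#     'clustermesh-secrets': 'kubernetes.io/secret', ...}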
Sep 12 17:33:25.937545 kubelet[2686]: E0912 17:33:25.937509 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:25.938201 containerd[1527]: time="2025-09-12T17:33:25.938029564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pslkq,Uid:3d753317-ee5e-4a95-91de-df9b088f897c,Namespace:kube-system,Attempt:0,}" Sep 12 17:33:25.964572 containerd[1527]: time="2025-09-12T17:33:25.964409124Z" level=info msg="connecting to shim 8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb" address="unix:///run/containerd/s/5b78c559f7fc7dc03292cc84deb48eb3e88449c462686b3455cf01a336a4fa43" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:33:25.989156 systemd[1]: Started cri-containerd-8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb.scope - libcontainer container 8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb. Sep 12 17:33:26.020923 containerd[1527]: time="2025-09-12T17:33:26.020881962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pslkq,Uid:3d753317-ee5e-4a95-91de-df9b088f897c,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\"" Sep 12 17:33:26.021688 kubelet[2686]: E0912 17:33:26.021669 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:26.024027 containerd[1527]: time="2025-09-12T17:33:26.023994345Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 12 17:33:26.031357 containerd[1527]: time="2025-09-12T17:33:26.031321196Z" level=info msg="Container 22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:33:26.037869 containerd[1527]: time="2025-09-12T17:33:26.037534402Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496\"" Sep 12 17:33:26.038212 containerd[1527]: time="2025-09-12T17:33:26.038178222Z" level=info msg="StartContainer for \"22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496\"" Sep 12 17:33:26.039341 containerd[1527]: time="2025-09-12T17:33:26.039304547Z" level=info msg="connecting to shim 22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496" address="unix:///run/containerd/s/5b78c559f7fc7dc03292cc84deb48eb3e88449c462686b3455cf01a336a4fa43" protocol=ttrpc version=3 Sep 12 17:33:26.060982 systemd[1]: Started cri-containerd-22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496.scope - libcontainer container 22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496. Sep 12 17:33:26.085594 containerd[1527]: time="2025-09-12T17:33:26.085557304Z" level=info msg="StartContainer for \"22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496\" returns successfully" Sep 12 17:33:26.094967 systemd[1]: cri-containerd-22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496.scope: Deactivated successfully. 
Sep 12 17:33:26.095842 containerd[1527]: time="2025-09-12T17:33:26.095476475Z" level=info msg="TaskExit event in podsandbox handler container_id:\"22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496\" id:\"22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496\" pid:4529 exited_at:{seconds:1757698406 nanos:94654221}" Sep 12 17:33:26.096103 containerd[1527]: time="2025-09-12T17:33:26.096026378Z" level=info msg="received exit event container_id:\"22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496\" id:\"22fecf78f6e0bf9ce923629299e6fc04a54c858c7a7e144e451b972c24d46496\" pid:4529 exited_at:{seconds:1757698406 nanos:94654221}" Sep 12 17:33:26.165521 kubelet[2686]: E0912 17:33:26.165476 2686 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 12 17:33:26.343541 kubelet[2686]: E0912 17:33:26.343091 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:26.346157 containerd[1527]: time="2025-09-12T17:33:26.346047740Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 12 17:33:26.357458 containerd[1527]: time="2025-09-12T17:33:26.357409626Z" level=info msg="Container b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:33:26.366513 containerd[1527]: time="2025-09-12T17:33:26.366470303Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4\"" Sep 12 17:33:26.367010 containerd[1527]: time="2025-09-12T17:33:26.366988007Z" level=info msg="StartContainer for \"b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4\"" Sep 12 17:33:26.368769 containerd[1527]: time="2025-09-12T17:33:26.368503800Z" level=info msg="connecting to shim b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4" address="unix:///run/containerd/s/5b78c559f7fc7dc03292cc84deb48eb3e88449c462686b3455cf01a336a4fa43" protocol=ttrpc version=3 Sep 12 17:33:26.395518 systemd[1]: Started cri-containerd-b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4.scope - libcontainer container b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4. Sep 12 17:33:26.419792 containerd[1527]: time="2025-09-12T17:33:26.419389093Z" level=info msg="StartContainer for \"b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4\" returns successfully" Sep 12 17:33:26.425044 systemd[1]: cri-containerd-b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4.scope: Deactivated successfully. 
Sep 12 17:33:26.426556 containerd[1527]: time="2025-09-12T17:33:26.426513830Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4\" id:\"b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4\" pid:4577 exited_at:{seconds:1757698406 nanos:426204480}" Sep 12 17:33:26.427998 containerd[1527]: time="2025-09-12T17:33:26.427972905Z" level=info msg="received exit event container_id:\"b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4\" id:\"b921b30b2c5fea0c198aef0d0a2b396ad428c2bd5a36e8e3dc81b29b7de7bbc4\" pid:4577 exited_at:{seconds:1757698406 nanos:426204480}" Sep 12 17:33:27.349804 kubelet[2686]: E0912 17:33:27.349466 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:27.354926 containerd[1527]: time="2025-09-12T17:33:27.354878056Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 12 17:33:27.370674 containerd[1527]: time="2025-09-12T17:33:27.370629158Z" level=info msg="Container 108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:33:27.390192 containerd[1527]: time="2025-09-12T17:33:27.390137030Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb\"" Sep 12 17:33:27.392655 containerd[1527]: time="2025-09-12T17:33:27.391794502Z" level=info msg="StartContainer for \"108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb\"" Sep 12 17:33:27.393408 containerd[1527]: time="2025-09-12T17:33:27.393383136Z" level=info msg="connecting to shim 108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb" address="unix:///run/containerd/s/5b78c559f7fc7dc03292cc84deb48eb3e88449c462686b3455cf01a336a4fa43" protocol=ttrpc version=3 Sep 12 17:33:27.430965 systemd[1]: Started cri-containerd-108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb.scope - libcontainer container 108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb. Sep 12 17:33:27.472525 systemd[1]: cri-containerd-108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb.scope: Deactivated successfully. 
Sep 12 17:33:27.475302 containerd[1527]: time="2025-09-12T17:33:27.475136277Z" level=info msg="received exit event container_id:\"108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb\" id:\"108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb\" pid:4622 exited_at:{seconds:1757698407 nanos:474843846}" Sep 12 17:33:27.475418 containerd[1527]: time="2025-09-12T17:33:27.475364791Z" level=info msg="TaskExit event in podsandbox handler container_id:\"108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb\" id:\"108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb\" pid:4622 exited_at:{seconds:1757698407 nanos:474843846}" Sep 12 17:33:27.477016 containerd[1527]: time="2025-09-12T17:33:27.476971424Z" level=info msg="StartContainer for \"108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb\" returns successfully" Sep 12 17:33:27.499806 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-108a4e64845f5ee8d502a0545effe3f280cb1d8e72ca7d5923ccc0c41586b3eb-rootfs.mount: Deactivated successfully. Sep 12 17:33:28.359103 kubelet[2686]: E0912 17:33:28.359035 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:28.365868 containerd[1527]: time="2025-09-12T17:33:28.365047086Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 12 17:33:28.376566 containerd[1527]: time="2025-09-12T17:33:28.375777036Z" level=info msg="Container a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:33:28.385548 containerd[1527]: time="2025-09-12T17:33:28.385427215Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe\"" Sep 12 17:33:28.386255 containerd[1527]: time="2025-09-12T17:33:28.386222793Z" level=info msg="StartContainer for \"a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe\"" Sep 12 17:33:28.387127 containerd[1527]: time="2025-09-12T17:33:28.387073370Z" level=info msg="connecting to shim a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe" address="unix:///run/containerd/s/5b78c559f7fc7dc03292cc84deb48eb3e88449c462686b3455cf01a336a4fa43" protocol=ttrpc version=3 Sep 12 17:33:28.419979 systemd[1]: Started cri-containerd-a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe.scope - libcontainer container a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe. Sep 12 17:33:28.451651 systemd[1]: cri-containerd-a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe.scope: Deactivated successfully. 
Sep 12 17:33:28.452033 containerd[1527]: time="2025-09-12T17:33:28.451914095Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe\" id:\"a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe\" pid:4662 exited_at:{seconds:1757698408 nanos:451726420}" Sep 12 17:33:28.454427 containerd[1527]: time="2025-09-12T17:33:28.454293831Z" level=info msg="received exit event container_id:\"a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe\" id:\"a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe\" pid:4662 exited_at:{seconds:1757698408 nanos:451726420}" Sep 12 17:33:28.457034 containerd[1527]: time="2025-09-12T17:33:28.456884561Z" level=info msg="StartContainer for \"a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe\" returns successfully" Sep 12 17:33:28.479974 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8361ffb0ad09656db9a93b071d0d5ca7d0fc950682ea539b15df1f704d36afe-rootfs.mount: Deactivated successfully. Sep 12 17:33:29.110038 kubelet[2686]: E0912 17:33:29.109974 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:29.365572 kubelet[2686]: E0912 17:33:29.365392 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:33:29.368714 containerd[1527]: time="2025-09-12T17:33:29.368659451Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 12 17:33:29.382107 containerd[1527]: time="2025-09-12T17:33:29.380815226Z" level=info msg="Container 4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:33:29.394886 containerd[1527]: time="2025-09-12T17:33:29.394832195Z" level=info msg="CreateContainer within sandbox \"8a54467c9deeaa838289661dfc0bc0d57cf64ff0d6f8a8159d7ab1796acd0fdb\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\"" Sep 12 17:33:29.395466 containerd[1527]: time="2025-09-12T17:33:29.395422060Z" level=info msg="StartContainer for \"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\"" Sep 12 17:33:29.396679 containerd[1527]: time="2025-09-12T17:33:29.396653909Z" level=info msg="connecting to shim 4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c" address="unix:///run/containerd/s/5b78c559f7fc7dc03292cc84deb48eb3e88449c462686b3455cf01a336a4fa43" protocol=ttrpc version=3 Sep 12 17:33:29.417283 systemd[1]: Started cri-containerd-4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c.scope - libcontainer container 4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c. 
Sep 12 17:33:29.453369 containerd[1527]: time="2025-09-12T17:33:29.453330767Z" level=info msg="StartContainer for \"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\" returns successfully"
Sep 12 17:33:29.525072 containerd[1527]: time="2025-09-12T17:33:29.525019448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\" id:\"5f42321de30ffc120f3e965ff86aab66cd0f850c8ce7bcf791f0d934ff5d51dc\" pid:4727 exited_at:{seconds:1757698409 nanos:524247467}"
Sep 12 17:33:29.749852 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 17:33:30.374252 kubelet[2686]: E0912 17:33:30.374181 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:33:31.110717 kubelet[2686]: E0912 17:33:31.110417 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:33:31.939327 kubelet[2686]: E0912 17:33:31.939279 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:33:32.220648 containerd[1527]: time="2025-09-12T17:33:32.220526207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\" id:\"dca8a10dfc27de8be9e0ccc58fe1b55342a0260d83d3a290ab75bd8a41f3b01d\" pid:5116 exit_status:1 exited_at:{seconds:1757698412 nanos:220003417}"
Sep 12 17:33:32.761731 systemd-networkd[1434]: lxc_health: Link UP
Sep 12 17:33:32.764398 systemd-networkd[1434]: lxc_health: Gained carrier
Sep 12 17:33:33.941104 kubelet[2686]: E0912 17:33:33.941017 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:33:33.961477 kubelet[2686]: I0912 17:33:33.960704 2686 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pslkq" podStartSLOduration=8.960690689 podStartE2EDuration="8.960690689s" podCreationTimestamp="2025-09-12 17:33:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:33:30.398175455 +0000 UTC m=+79.404146930" watchObservedRunningTime="2025-09-12 17:33:33.960690689 +0000 UTC m=+82.966662164"
Sep 12 17:33:34.382425 kubelet[2686]: E0912 17:33:34.382369 2686 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:33:34.419421 containerd[1527]: time="2025-09-12T17:33:34.419187223Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\" id:\"32ec533e468a385e50433c7d21c6e7760f71c343a8dc3739e5293bc2ec172c55\" pid:5264 exited_at:{seconds:1757698414 nanos:418701071}"
Sep 12 17:33:34.523923 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Sep 12 17:33:36.546628 containerd[1527]: time="2025-09-12T17:33:36.546580344Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\" id:\"2346c3a1f6caeaca325b8695ef8a8d4e745f941726bc2c994a380a993a686733\" pid:5297 exited_at:{seconds:1757698416 nanos:546104710}"
Sep 12 17:33:38.727342 containerd[1527]: time="2025-09-12T17:33:38.727296083Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4fb793579a3559f7e0702b361cb5996902482c2bfc372d1bd156b98b3ce4f20c\" id:\"ed5998abe7a29a57a34b81d5603f23c40ec469679e866c3882b61076ec21eb7d\" pid:5321 exited_at:{seconds:1757698418 nanos:726784488}"
Sep 12 17:33:38.731743 sshd[4464]: Connection closed by 10.0.0.1 port 39824
Sep 12 17:33:38.732406 sshd-session[4457]: pam_unix(sshd:session): session closed for user core
Sep 12 17:33:38.736084 systemd[1]: sshd@24-10.0.0.138:22-10.0.0.1:39824.service: Deactivated successfully.
Sep 12 17:33:38.738344 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:33:38.739759 systemd-logind[1496]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:33:38.741370 systemd-logind[1496]: Removed session 25.
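For the pod_startup_latency_tracker entry further up, the reported podStartSLOduration of 8.960690689 is exactly the gap between podCreationTimestamp and watchObservedRunningTime. A quick Go sketch of that arithmetic, dropping the monotonic-clock suffix (m=+...) from the logged values (illustration only, not kubelet's tracker code):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // Layout matching kubelet's logged wall-clock format, without the m=+ suffix.
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-09-12 17:33:25 +0000 UTC")
        observed, _ := time.Parse(layout, "2025-09-12 17:33:33.960690689 +0000 UTC")
        fmt.Println(observed.Sub(created)) // 8.960690689s, matching the reported podStartSLOduration
    }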