Sep 6 00:04:51.853639 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 6 00:04:51.853663 kernel: Linux version 6.6.103-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 5 22:30:47 -00 2025
Sep 6 00:04:51.853672 kernel: KASLR enabled
Sep 6 00:04:51.853678 kernel: efi: EFI v2.7 by EDK II
Sep 6 00:04:51.853684 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Sep 6 00:04:51.853689 kernel: random: crng init done
Sep 6 00:04:51.853697 kernel: ACPI: Early table checksum verification disabled
Sep 6 00:04:51.853702 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Sep 6 00:04:51.853709 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 6 00:04:51.853716 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853722 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853728 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853735 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853741 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853748 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853756 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853763 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853770 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 6 00:04:51.853776 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 6 00:04:51.853782 kernel: NUMA: Failed to initialise from firmware
Sep 6 00:04:51.853789 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:04:51.853796 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Sep 6 00:04:51.853802 kernel: Zone ranges:
Sep 6 00:04:51.853808 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:04:51.853814 kernel: DMA32 empty
Sep 6 00:04:51.853822 kernel: Normal empty
Sep 6 00:04:51.853828 kernel: Movable zone start for each node
Sep 6 00:04:51.853834 kernel: Early memory node ranges
Sep 6 00:04:51.853841 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Sep 6 00:04:51.853847 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Sep 6 00:04:51.853857 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Sep 6 00:04:51.853863 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 6 00:04:51.853870 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 6 00:04:51.853876 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 6 00:04:51.853883 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 6 00:04:51.853889 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 6 00:04:51.853895 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 6 00:04:51.853903 kernel: psci: probing for conduit method from ACPI.
Sep 6 00:04:51.853909 kernel: psci: PSCIv1.1 detected in firmware.
Sep 6 00:04:51.853916 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 6 00:04:51.853925 kernel: psci: Trusted OS migration not required
Sep 6 00:04:51.853941 kernel: psci: SMC Calling Convention v1.1
Sep 6 00:04:51.853949 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 6 00:04:51.853958 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
Sep 6 00:04:51.853965 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
Sep 6 00:04:51.853972 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 6 00:04:51.853979 kernel: Detected PIPT I-cache on CPU0
Sep 6 00:04:51.853986 kernel: CPU features: detected: GIC system register CPU interface
Sep 6 00:04:51.853992 kernel: CPU features: detected: Hardware dirty bit management
Sep 6 00:04:51.853999 kernel: CPU features: detected: Spectre-v4
Sep 6 00:04:51.854006 kernel: CPU features: detected: Spectre-BHB
Sep 6 00:04:51.854012 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 6 00:04:51.854019 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 6 00:04:51.854027 kernel: CPU features: detected: ARM erratum 1418040
Sep 6 00:04:51.854034 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 6 00:04:51.854041 kernel: alternatives: applying boot alternatives
Sep 6 00:04:51.854049 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 6 00:04:51.854056 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 6 00:04:51.854063 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 6 00:04:51.854070 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 6 00:04:51.854077 kernel: Fallback order for Node 0: 0
Sep 6 00:04:51.854084 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Sep 6 00:04:51.854091 kernel: Policy zone: DMA
Sep 6 00:04:51.854097 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 6 00:04:51.854105 kernel: software IO TLB: area num 4.
Sep 6 00:04:51.854125 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Sep 6 00:04:51.854133 kernel: Memory: 2386404K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185884K reserved, 0K cma-reserved)
Sep 6 00:04:51.854141 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 6 00:04:51.854149 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 6 00:04:51.854168 kernel: rcu: RCU event tracing is enabled.
Sep 6 00:04:51.854175 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 6 00:04:51.854182 kernel: Trampoline variant of Tasks RCU enabled.
Sep 6 00:04:51.854189 kernel: Tracing variant of Tasks RCU enabled.
Sep 6 00:04:51.854196 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 6 00:04:51.854203 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 6 00:04:51.854212 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 6 00:04:51.854219 kernel: GICv3: 256 SPIs implemented
Sep 6 00:04:51.854226 kernel: GICv3: 0 Extended SPIs implemented
Sep 6 00:04:51.854233 kernel: Root IRQ handler: gic_handle_irq
Sep 6 00:04:51.854240 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 6 00:04:51.854247 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 6 00:04:51.854254 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 6 00:04:51.854284 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Sep 6 00:04:51.854292 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Sep 6 00:04:51.854299 kernel: GICv3: using LPI property table @0x00000000400f0000
Sep 6 00:04:51.854306 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Sep 6 00:04:51.854313 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 6 00:04:51.854321 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:04:51.854328 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 6 00:04:51.854335 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 6 00:04:51.854343 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 6 00:04:51.854350 kernel: arm-pv: using stolen time PV
Sep 6 00:04:51.854358 kernel: Console: colour dummy device 80x25
Sep 6 00:04:51.854365 kernel: ACPI: Core revision 20230628
Sep 6 00:04:51.854372 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 6 00:04:51.854379 kernel: pid_max: default: 32768 minimum: 301
Sep 6 00:04:51.854386 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Sep 6 00:04:51.854395 kernel: landlock: Up and running.
Sep 6 00:04:51.854402 kernel: SELinux: Initializing.
Sep 6 00:04:51.854409 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:04:51.854416 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 6 00:04:51.854423 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 00:04:51.854430 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 6 00:04:51.854437 kernel: rcu: Hierarchical SRCU implementation.
Sep 6 00:04:51.854443 kernel: rcu: Max phase no-delay instances is 400.
Sep 6 00:04:51.854450 kernel: Platform MSI: ITS@0x8080000 domain created
Sep 6 00:04:51.854458 kernel: PCI/MSI: ITS@0x8080000 domain created
Sep 6 00:04:51.854465 kernel: Remapping and enabling EFI services.
Sep 6 00:04:51.854472 kernel: smp: Bringing up secondary CPUs ...
Sep 6 00:04:51.854479 kernel: Detected PIPT I-cache on CPU1
Sep 6 00:04:51.854486 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 6 00:04:51.854493 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Sep 6 00:04:51.854500 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:04:51.854507 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 6 00:04:51.854514 kernel: Detected PIPT I-cache on CPU2
Sep 6 00:04:51.854521 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 6 00:04:51.854529 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Sep 6 00:04:51.854536 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:04:51.854555 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 6 00:04:51.854564 kernel: Detected PIPT I-cache on CPU3
Sep 6 00:04:51.854571 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 6 00:04:51.854578 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Sep 6 00:04:51.854586 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 6 00:04:51.854592 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 6 00:04:51.854600 kernel: smp: Brought up 1 node, 4 CPUs
Sep 6 00:04:51.854609 kernel: SMP: Total of 4 processors activated.
Sep 6 00:04:51.854616 kernel: CPU features: detected: 32-bit EL0 Support
Sep 6 00:04:51.854624 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 6 00:04:51.854631 kernel: CPU features: detected: Common not Private translations
Sep 6 00:04:51.854638 kernel: CPU features: detected: CRC32 instructions
Sep 6 00:04:51.854646 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 6 00:04:51.854653 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 6 00:04:51.854660 kernel: CPU features: detected: LSE atomic instructions
Sep 6 00:04:51.854669 kernel: CPU features: detected: Privileged Access Never
Sep 6 00:04:51.854676 kernel: CPU features: detected: RAS Extension Support
Sep 6 00:04:51.854683 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 6 00:04:51.854690 kernel: CPU: All CPU(s) started at EL1
Sep 6 00:04:51.854697 kernel: alternatives: applying system-wide alternatives
Sep 6 00:04:51.854705 kernel: devtmpfs: initialized
Sep 6 00:04:51.854712 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 6 00:04:51.854720 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 6 00:04:51.854727 kernel: pinctrl core: initialized pinctrl subsystem
Sep 6 00:04:51.854736 kernel: SMBIOS 3.0.0 present.
Sep 6 00:04:51.854743 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Sep 6 00:04:51.854751 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 6 00:04:51.854758 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 6 00:04:51.854765 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 6 00:04:51.854773 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 6 00:04:51.854780 kernel: audit: initializing netlink subsys (disabled)
Sep 6 00:04:51.854787 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
Sep 6 00:04:51.854795 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 6 00:04:51.854803 kernel: cpuidle: using governor menu
Sep 6 00:04:51.854811 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 6 00:04:51.854818 kernel: ASID allocator initialised with 32768 entries
Sep 6 00:04:51.854825 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 6 00:04:51.854833 kernel: Serial: AMBA PL011 UART driver
Sep 6 00:04:51.854840 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 6 00:04:51.854847 kernel: Modules: 0 pages in range for non-PLT usage
Sep 6 00:04:51.854855 kernel: Modules: 509008 pages in range for PLT usage
Sep 6 00:04:51.854862 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 6 00:04:51.854871 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 6 00:04:51.854878 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 6 00:04:51.854886 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 6 00:04:51.854893 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 6 00:04:51.854900 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 6 00:04:51.854908 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 6 00:04:51.854915 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 6 00:04:51.854923 kernel: ACPI: Added _OSI(Module Device)
Sep 6 00:04:51.854930 kernel: ACPI: Added _OSI(Processor Device)
Sep 6 00:04:51.854943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 6 00:04:51.854951 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 6 00:04:51.854958 kernel: ACPI: Interpreter enabled
Sep 6 00:04:51.854965 kernel: ACPI: Using GIC for interrupt routing
Sep 6 00:04:51.854973 kernel: ACPI: MCFG table detected, 1 entries
Sep 6 00:04:51.854980 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 6 00:04:51.854987 kernel: printk: console [ttyAMA0] enabled
Sep 6 00:04:51.854995 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 6 00:04:51.855145 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 6 00:04:51.855227 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 6 00:04:51.855295 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 6 00:04:51.855362 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 6 00:04:51.855428 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 6 00:04:51.855438 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 6 00:04:51.855445 kernel: PCI host bridge to bus 0000:00
Sep 6 00:04:51.855520 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 6 00:04:51.855612 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 6 00:04:51.855709 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 6 00:04:51.855773 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 6 00:04:51.855872 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Sep 6 00:04:51.855964 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Sep 6 00:04:51.856039 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Sep 6 00:04:51.856114 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Sep 6 00:04:51.856182 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:04:51.856250 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 6 00:04:51.856317 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Sep 6 00:04:51.856385 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Sep 6 00:04:51.856448 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 6 00:04:51.856508 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 6 00:04:51.856580 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 6 00:04:51.856592 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 6 00:04:51.856600 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 6 00:04:51.856608 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 6 00:04:51.856615 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 6 00:04:51.856622 kernel: iommu: Default domain type: Translated
Sep 6 00:04:51.856630 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 6 00:04:51.856638 kernel: efivars: Registered efivars operations
Sep 6 00:04:51.856645 kernel: vgaarb: loaded
Sep 6 00:04:51.856655 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 6 00:04:51.856663 kernel: VFS: Disk quotas dquot_6.6.0
Sep 6 00:04:51.856670 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 6 00:04:51.856677 kernel: pnp: PnP ACPI init
Sep 6 00:04:51.856752 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 6 00:04:51.856763 kernel: pnp: PnP ACPI: found 1 devices
Sep 6 00:04:51.856771 kernel: NET: Registered PF_INET protocol family
Sep 6 00:04:51.856778 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 6 00:04:51.856788 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 6 00:04:51.856795 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 6 00:04:51.856803 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 6 00:04:51.856810 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 6 00:04:51.856818 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 6 00:04:51.856825 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:04:51.856833 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 6 00:04:51.856840 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 6 00:04:51.856848 kernel: PCI: CLS 0 bytes, default 64
Sep 6 00:04:51.856856 kernel: kvm [1]: HYP mode not available
Sep 6 00:04:51.856864 kernel: Initialise system trusted keyrings
Sep 6 00:04:51.856871 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 6 00:04:51.856879 kernel: Key type asymmetric registered
Sep 6 00:04:51.856886 kernel: Asymmetric key parser 'x509' registered
Sep 6 00:04:51.856893 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Sep 6 00:04:51.856901 kernel: io scheduler mq-deadline registered
Sep 6 00:04:51.856908 kernel: io scheduler kyber registered
Sep 6 00:04:51.856916 kernel: io scheduler bfq registered
Sep 6 00:04:51.856925 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 6 00:04:51.856983 kernel: ACPI: button: Power Button [PWRB]
Sep 6 00:04:51.856995 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 6 00:04:51.857078 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 6 00:04:51.857089 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 6 00:04:51.857096 kernel: thunder_xcv, ver 1.0
Sep 6 00:04:51.857103 kernel: thunder_bgx, ver 1.0
Sep 6 00:04:51.857111 kernel: nicpf, ver 1.0
Sep 6 00:04:51.857118 kernel: nicvf, ver 1.0
Sep 6 00:04:51.857200 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 6 00:04:51.857267 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-06T00:04:51 UTC (1757117091)
Sep 6 00:04:51.857278 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 6 00:04:51.857285 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Sep 6 00:04:51.857293 kernel: watchdog: Delayed init of the lockup detector failed: -19
Sep 6 00:04:51.857301 kernel: watchdog: Hard watchdog permanently disabled
Sep 6 00:04:51.857308 kernel: NET: Registered PF_INET6 protocol family
Sep 6 00:04:51.857316 kernel: Segment Routing with IPv6
Sep 6 00:04:51.857326 kernel: In-situ OAM (IOAM) with IPv6
Sep 6 00:04:51.857333 kernel: NET: Registered PF_PACKET protocol family
Sep 6 00:04:51.857341 kernel: Key type dns_resolver registered
Sep 6 00:04:51.857348 kernel: registered taskstats version 1
Sep 6 00:04:51.857355 kernel: Loading compiled-in X.509 certificates
Sep 6 00:04:51.857363 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.103-flatcar: 5b16e1dfa86dac534548885fd675b87757ff9e20'
Sep 6 00:04:51.857370 kernel: Key type .fscrypt registered
Sep 6 00:04:51.857377 kernel: Key type fscrypt-provisioning registered
Sep 6 00:04:51.857385 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 6 00:04:51.857394 kernel: ima: Allocated hash algorithm: sha1
Sep 6 00:04:51.857402 kernel: ima: No architecture policies found
Sep 6 00:04:51.857409 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 6 00:04:51.857416 kernel: clk: Disabling unused clocks
Sep 6 00:04:51.857424 kernel: Freeing unused kernel memory: 39424K
Sep 6 00:04:51.857431 kernel: Run /init as init process
Sep 6 00:04:51.857438 kernel: with arguments:
Sep 6 00:04:51.857445 kernel: /init
Sep 6 00:04:51.857452 kernel: with environment:
Sep 6 00:04:51.857461 kernel: HOME=/
Sep 6 00:04:51.857468 kernel: TERM=linux
Sep 6 00:04:51.857476 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 6 00:04:51.857485 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 6 00:04:51.857494 systemd[1]: Detected virtualization kvm.
Sep 6 00:04:51.857503 systemd[1]: Detected architecture arm64.
Sep 6 00:04:51.857510 systemd[1]: Running in initrd.
Sep 6 00:04:51.857518 systemd[1]: No hostname configured, using default hostname.
Sep 6 00:04:51.857527 systemd[1]: Hostname set to .
Sep 6 00:04:51.857535 systemd[1]: Initializing machine ID from VM UUID.
Sep 6 00:04:51.857552 systemd[1]: Queued start job for default target initrd.target.
Sep 6 00:04:51.857561 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 00:04:51.857569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 00:04:51.857578 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 6 00:04:51.857586 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 6 00:04:51.857596 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 6 00:04:51.857604 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 6 00:04:51.857614 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 6 00:04:51.857622 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 6 00:04:51.857630 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 00:04:51.857638 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 6 00:04:51.857646 systemd[1]: Reached target paths.target - Path Units.
Sep 6 00:04:51.857656 systemd[1]: Reached target slices.target - Slice Units.
Sep 6 00:04:51.857664 systemd[1]: Reached target swap.target - Swaps.
Sep 6 00:04:51.857672 systemd[1]: Reached target timers.target - Timer Units.
Sep 6 00:04:51.857679 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 00:04:51.857688 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 00:04:51.857696 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 6 00:04:51.857704 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 6 00:04:51.857712 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 00:04:51.857720 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 6 00:04:51.857729 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 6 00:04:51.857737 systemd[1]: Reached target sockets.target - Socket Units.
Sep 6 00:04:51.857745 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 6 00:04:51.857753 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 6 00:04:51.857761 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 6 00:04:51.857769 systemd[1]: Starting systemd-fsck-usr.service...
Sep 6 00:04:51.857777 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 6 00:04:51.857785 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 6 00:04:51.857794 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:04:51.857802 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 6 00:04:51.857810 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 00:04:51.857818 systemd[1]: Finished systemd-fsck-usr.service.
Sep 6 00:04:51.857847 systemd-journald[237]: Collecting audit messages is disabled.
Sep 6 00:04:51.857870 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 6 00:04:51.857878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:04:51.857887 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 6 00:04:51.857895 kernel: Bridge firewalling registered
Sep 6 00:04:51.857904 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 6 00:04:51.857913 systemd-journald[237]: Journal started
Sep 6 00:04:51.857932 systemd-journald[237]: Runtime Journal (/run/log/journal/4ba2ed9da3104c37bb120e50c33a6a2c) is 5.9M, max 47.3M, 41.4M free.
Sep 6 00:04:51.840100 systemd-modules-load[239]: Inserted module 'overlay'
Sep 6 00:04:51.860128 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 6 00:04:51.855758 systemd-modules-load[239]: Inserted module 'br_netfilter'
Sep 6 00:04:51.861129 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 6 00:04:51.867311 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:04:51.869022 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 6 00:04:51.870998 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 6 00:04:51.873134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 6 00:04:51.880832 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:04:51.885323 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 6 00:04:51.889515 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 00:04:51.901115 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 6 00:04:51.902162 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:04:51.905262 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 6 00:04:51.919294 dracut-cmdline[279]: dracut-dracut-053
Sep 6 00:04:51.921931 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=ac831c89fe9ee7829b7371dadfb138f8d0e2b31ae3a5a920e0eba13bbab016c3
Sep 6 00:04:51.930433 systemd-resolved[276]: Positive Trust Anchors:
Sep 6 00:04:51.930451 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 6 00:04:51.930484 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 6 00:04:51.935495 systemd-resolved[276]: Defaulting to hostname 'linux'.
Sep 6 00:04:51.936567 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 6 00:04:51.939215 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 6 00:04:51.991967 kernel: SCSI subsystem initialized
Sep 6 00:04:51.996949 kernel: Loading iSCSI transport class v2.0-870.
Sep 6 00:04:52.004962 kernel: iscsi: registered transport (tcp)
Sep 6 00:04:52.017960 kernel: iscsi: registered transport (qla4xxx)
Sep 6 00:04:52.017979 kernel: QLogic iSCSI HBA Driver
Sep 6 00:04:52.062798 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 6 00:04:52.081117 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 6 00:04:52.097003 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 6 00:04:52.098022 kernel: device-mapper: uevent: version 1.0.3
Sep 6 00:04:52.098042 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Sep 6 00:04:52.143963 kernel: raid6: neonx8 gen() 15769 MB/s
Sep 6 00:04:52.160950 kernel: raid6: neonx4 gen() 15672 MB/s
Sep 6 00:04:52.177947 kernel: raid6: neonx2 gen() 13259 MB/s
Sep 6 00:04:52.194946 kernel: raid6: neonx1 gen() 10516 MB/s
Sep 6 00:04:52.211946 kernel: raid6: int64x8 gen() 6955 MB/s
Sep 6 00:04:52.228946 kernel: raid6: int64x4 gen() 7328 MB/s
Sep 6 00:04:52.245946 kernel: raid6: int64x2 gen() 6125 MB/s
Sep 6 00:04:52.262946 kernel: raid6: int64x1 gen() 5056 MB/s
Sep 6 00:04:52.262962 kernel: raid6: using algorithm neonx8 gen() 15769 MB/s
Sep 6 00:04:52.279962 kernel: raid6: .... xor() 12055 MB/s, rmw enabled
Sep 6 00:04:52.279998 kernel: raid6: using neon recovery algorithm
Sep 6 00:04:52.284993 kernel: xor: measuring software checksum speed
Sep 6 00:04:52.285011 kernel: 8regs : 19317 MB/sec
Sep 6 00:04:52.286132 kernel: 32regs : 19664 MB/sec
Sep 6 00:04:52.286144 kernel: arm64_neon : 27087 MB/sec
Sep 6 00:04:52.286153 kernel: xor: using function: arm64_neon (27087 MB/sec)
Sep 6 00:04:52.334963 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 6 00:04:52.349005 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 6 00:04:52.360159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 00:04:52.371323 systemd-udevd[462]: Using default interface naming scheme 'v255'.
Sep 6 00:04:52.374491 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 00:04:52.385147 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 6 00:04:52.397292 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation
Sep 6 00:04:52.426288 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 00:04:52.441155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 6 00:04:52.483727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 00:04:52.494522 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 6 00:04:52.508589 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 6 00:04:52.510353 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 00:04:52.511833 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 00:04:52.514487 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 6 00:04:52.524106 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 6 00:04:52.529960 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 6 00:04:52.538724 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 6 00:04:52.537220 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 00:04:52.550417 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 6 00:04:52.550589 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:04:52.557081 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 6 00:04:52.557105 kernel: GPT:9289727 != 19775487
Sep 6 00:04:52.557114 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 6 00:04:52.557124 kernel: GPT:9289727 != 19775487
Sep 6 00:04:52.557132 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 6 00:04:52.557141 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:04:52.558813 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:04:52.561211 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 6 00:04:52.561292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:04:52.563115 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:04:52.573123 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 6 00:04:52.577965 kernel: BTRFS: device fsid 045c118e-b098-46f0-884a-43665575c70e devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (509)
Sep 6 00:04:52.578949 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (506)
Sep 6 00:04:52.581584 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 6 00:04:52.590908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 6 00:04:52.600723 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 6 00:04:52.605635 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 6 00:04:52.606787 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 6 00:04:52.612166 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 6 00:04:52.625123 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 6 00:04:52.626834 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 6 00:04:52.631970 disk-uuid[552]: Primary Header is updated.
Sep 6 00:04:52.631970 disk-uuid[552]: Secondary Entries is updated.
Sep 6 00:04:52.631970 disk-uuid[552]: Secondary Header is updated.
Sep 6 00:04:52.637961 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:04:52.644036 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:04:52.647968 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:04:52.650009 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 6 00:04:53.648278 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 6 00:04:53.648337 disk-uuid[553]: The operation has completed successfully.
Sep 6 00:04:53.675435 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 6 00:04:53.675539 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 6 00:04:53.693090 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 6 00:04:53.696385 sh[577]: Success
Sep 6 00:04:53.707957 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 6 00:04:53.746399 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 6 00:04:53.748118 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 6 00:04:53.749985 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 6 00:04:53.759203 kernel: BTRFS info (device dm-0): first mount of filesystem 045c118e-b098-46f0-884a-43665575c70e
Sep 6 00:04:53.759252 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:04:53.759263 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 6 00:04:53.761092 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 6 00:04:53.761106 kernel: BTRFS info (device dm-0): using free space tree
Sep 6 00:04:53.764433 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 6 00:04:53.765583 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 6 00:04:53.766297 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 6 00:04:53.768411 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 6 00:04:53.779585 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:04:53.779623 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:04:53.779633 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:04:53.783717 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:04:53.791163 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 6 00:04:53.792962 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:04:53.798927 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 6 00:04:53.805143 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 6 00:04:53.859705 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 00:04:53.868100 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 6 00:04:53.883582 ignition[679]: Ignition 2.19.0
Sep 6 00:04:53.883593 ignition[679]: Stage: fetch-offline
Sep 6 00:04:53.883630 ignition[679]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:04:53.883639 ignition[679]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:04:53.883802 ignition[679]: parsed url from cmdline: ""
Sep 6 00:04:53.883805 ignition[679]: no config URL provided
Sep 6 00:04:53.883809 ignition[679]: reading system config file "/usr/lib/ignition/user.ign"
Sep 6 00:04:53.883816 ignition[679]: no config at "/usr/lib/ignition/user.ign"
Sep 6 00:04:53.883839 ignition[679]: op(1): [started] loading QEMU firmware config module
Sep 6 00:04:53.889453 systemd-networkd[765]: lo: Link UP
Sep 6 00:04:53.883848 ignition[679]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 6 00:04:53.889457 systemd-networkd[765]: lo: Gained carrier
Sep 6 00:04:53.889783 ignition[679]: op(1): [finished] loading QEMU firmware config module
Sep 6 00:04:53.890292 systemd-networkd[765]: Enumeration completed
Sep 6 00:04:53.890884 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 6 00:04:53.892193 systemd[1]: Reached target network.target - Network.
Sep 6 00:04:53.893553 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:04:53.893557 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 6 00:04:53.894465 systemd-networkd[765]: eth0: Link UP
Sep 6 00:04:53.894469 systemd-networkd[765]: eth0: Gained carrier
Sep 6 00:04:53.894476 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 6 00:04:53.908984 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 6 00:04:53.939700 ignition[679]: parsing config with SHA512: 6c9b91cf21b87c61d1b69e1de576c52020fa0e2d5ebd25b53f3cc3884a06cf44dd8c1d08ac0e1497b5a52d87438491b0cc956b241bd2f14a13aea42f3576cc89
Sep 6 00:04:53.944759 unknown[679]: fetched base config from "system"
Sep 6 00:04:53.944776 unknown[679]: fetched user config from "qemu"
Sep 6 00:04:53.945474 ignition[679]: fetch-offline: fetch-offline passed
Sep 6 00:04:53.947171 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 00:04:53.945576 ignition[679]: Ignition finished successfully
Sep 6 00:04:53.948291 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 6 00:04:53.954123 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 6 00:04:53.965424 ignition[771]: Ignition 2.19.0
Sep 6 00:04:53.965435 ignition[771]: Stage: kargs
Sep 6 00:04:53.965612 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:04:53.965623 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:04:53.966478 ignition[771]: kargs: kargs passed
Sep 6 00:04:53.966524 ignition[771]: Ignition finished successfully
Sep 6 00:04:53.968557 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 6 00:04:53.979091 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 6 00:04:53.989914 ignition[780]: Ignition 2.19.0
Sep 6 00:04:53.989928 ignition[780]: Stage: disks
Sep 6 00:04:53.990114 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Sep 6 00:04:53.990124 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:04:53.992306 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 6 00:04:53.991037 ignition[780]: disks: disks passed
Sep 6 00:04:53.993617 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 6 00:04:53.991088 ignition[780]: Ignition finished successfully
Sep 6 00:04:53.994834 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 6 00:04:53.996105 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 6 00:04:53.997530 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 6 00:04:53.998775 systemd[1]: Reached target basic.target - Basic System.
Sep 6 00:04:54.008089 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 6 00:04:54.025164 systemd-resolved[276]: Detected conflict on linux IN A 10.0.0.93
Sep 6 00:04:54.025201 systemd-resolved[276]: Hostname conflict, changing published hostname from 'linux' to 'linux5'.
Sep 6 00:04:54.027511 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 6 00:04:54.029819 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 6 00:04:54.038081 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 6 00:04:54.080955 kernel: EXT4-fs (vda9): mounted filesystem 72e55cb0-8368-4871-a3a0-8637412e72e8 r/w with ordered data mode. Quota mode: none.
Sep 6 00:04:54.081710 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 6 00:04:54.082843 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 6 00:04:54.094029 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 00:04:54.096188 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 6 00:04:54.097100 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 6 00:04:54.097142 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 6 00:04:54.097164 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 00:04:54.104419 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 6 00:04:54.106739 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 6 00:04:54.112989 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (798)
Sep 6 00:04:54.113027 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:04:54.113949 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:04:54.113971 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:04:54.116945 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:04:54.118037 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 00:04:54.147357 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory
Sep 6 00:04:54.151375 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory
Sep 6 00:04:54.154466 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory
Sep 6 00:04:54.158689 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 6 00:04:54.236068 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 6 00:04:54.243030 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 6 00:04:54.244446 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 6 00:04:54.250949 kernel: BTRFS info (device vda6): last unmount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:04:54.267337 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 6 00:04:54.278159 ignition[914]: INFO : Ignition 2.19.0
Sep 6 00:04:54.278159 ignition[914]: INFO : Stage: mount
Sep 6 00:04:54.279491 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:04:54.279491 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:04:54.279491 ignition[914]: INFO : mount: mount passed
Sep 6 00:04:54.279491 ignition[914]: INFO : Ignition finished successfully
Sep 6 00:04:54.282997 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 6 00:04:54.298042 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 6 00:04:54.758963 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 6 00:04:54.775174 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 6 00:04:54.781585 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (926)
Sep 6 00:04:54.781626 kernel: BTRFS info (device vda6): first mount of filesystem 7395d4d5-ecb1-4acb-b5a4-3e846eddb858
Sep 6 00:04:54.781638 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 6 00:04:54.782314 kernel: BTRFS info (device vda6): using free space tree
Sep 6 00:04:54.785967 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 6 00:04:54.787425 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 6 00:04:54.811033 ignition[943]: INFO : Ignition 2.19.0
Sep 6 00:04:54.811033 ignition[943]: INFO : Stage: files
Sep 6 00:04:54.812358 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:04:54.812358 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:04:54.812358 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Sep 6 00:04:54.815072 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 6 00:04:54.815072 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 6 00:04:54.817209 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 6 00:04:54.817209 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 6 00:04:54.817209 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 6 00:04:54.816349 unknown[943]: wrote ssh authorized keys file for user: core
Sep 6 00:04:54.821155 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 00:04:54.821155 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 6 00:04:54.870265 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 6 00:04:55.144479 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 6 00:04:55.144479 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:04:55.148649 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 6 00:04:55.349816 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 6 00:04:55.492202 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 6 00:04:55.493905 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:04:55.515870 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:04:55.515870 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:04:55.515870 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 6 00:04:55.544677 systemd-networkd[765]: eth0: Gained IPv6LL
Sep 6 00:04:55.812881 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 6 00:04:56.280804 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 6 00:04:56.280804 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 6 00:04:56.283977 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:04:56.304081 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:04:56.308656 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 6 00:04:56.311079 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 6 00:04:56.311079 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 6 00:04:56.311079 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 6 00:04:56.311079 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:04:56.311079 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 6 00:04:56.311079 ignition[943]: INFO : files: files passed
Sep 6 00:04:56.311079 ignition[943]: INFO : Ignition finished successfully
Sep 6 00:04:56.311437 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 6 00:04:56.325125 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 6 00:04:56.327833 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 6 00:04:56.330412 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 6 00:04:56.330536 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 6 00:04:56.335641 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 6 00:04:56.338665 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:04:56.338665 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:04:56.341312 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 6 00:04:56.343337 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 00:04:56.344508 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 6 00:04:56.352087 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 6 00:04:56.371792 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 6 00:04:56.371890 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 6 00:04:56.373855 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 6 00:04:56.375361 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 6 00:04:56.376700 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 6 00:04:56.391091 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 6 00:04:56.403737 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 00:04:56.406002 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 6 00:04:56.418477 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 6 00:04:56.420103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 6 00:04:56.421403 systemd[1]: Stopped target timers.target - Timer Units.
Sep 6 00:04:56.422872 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 6 00:04:56.423025 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 6 00:04:56.425157 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 6 00:04:56.426895 systemd[1]: Stopped target basic.target - Basic System.
Sep 6 00:04:56.428361 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 6 00:04:56.429836 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 6 00:04:56.431606 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 6 00:04:56.433279 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 6 00:04:56.434885 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 6 00:04:56.436645 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 6 00:04:56.438532 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 6 00:04:56.439987 systemd[1]: Stopped target swap.target - Swaps.
Sep 6 00:04:56.441352 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 6 00:04:56.441475 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 6 00:04:56.443479 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 6 00:04:56.445144 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 6 00:04:56.446872 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 6 00:04:56.448016 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 6 00:04:56.449530 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 6 00:04:56.449654 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 6 00:04:56.451763 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 6 00:04:56.451878 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 6 00:04:56.453496 systemd[1]: Stopped target paths.target - Path Units.
Sep 6 00:04:56.454641 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 6 00:04:56.458970 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 6 00:04:56.459971 systemd[1]: Stopped target slices.target - Slice Units.
Sep 6 00:04:56.461634 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 6 00:04:56.462818 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 6 00:04:56.462910 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 6 00:04:56.464120 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 6 00:04:56.464200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 6 00:04:56.465495 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 6 00:04:56.465614 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 6 00:04:56.466901 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 6 00:04:56.467019 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 6 00:04:56.475113 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 6 00:04:56.475799 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 6 00:04:56.475920 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 6 00:04:56.478167 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 6 00:04:56.479635 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 6 00:04:56.479749 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 6 00:04:56.481296 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 6 00:04:56.481447 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 6 00:04:56.487274 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 6 00:04:56.487362 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 6 00:04:56.492075 ignition[998]: INFO : Ignition 2.19.0
Sep 6 00:04:56.492075 ignition[998]: INFO : Stage: umount
Sep 6 00:04:56.492075 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 6 00:04:56.492075 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 6 00:04:56.492075 ignition[998]: INFO : umount: umount passed
Sep 6 00:04:56.492075 ignition[998]: INFO : Ignition finished successfully
Sep 6 00:04:56.491227 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 6 00:04:56.491327 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 6 00:04:56.493604 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 6 00:04:56.494010 systemd[1]: Stopped target network.target - Network.
Sep 6 00:04:56.496964 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 6 00:04:56.497033 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 6 00:04:56.498602 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 6 00:04:56.498646 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 6 00:04:56.501963 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 6 00:04:56.502015 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 6 00:04:56.503157 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 6 00:04:56.503195 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 6 00:04:56.505895 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 6 00:04:56.507168 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 6 00:04:56.517981 systemd-networkd[765]: eth0: DHCPv6 lease lost
Sep 6 00:04:56.518638 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 6 00:04:56.518773 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 6 00:04:56.520726 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 6 00:04:56.520848 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 6 00:04:56.522585 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 6 00:04:56.522639 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 6 00:04:56.534046 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 6 00:04:56.534723 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 6 00:04:56.534780 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 6 00:04:56.536783 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 6 00:04:56.536829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 6 00:04:56.538384 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 6 00:04:56.538430 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 6 00:04:56.541885 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 6 00:04:56.541990 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 6 00:04:56.543855 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 6 00:04:56.555218 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 6 00:04:56.555338 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 6 00:04:56.559276 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 6 00:04:56.559412 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 6 00:04:56.561268 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 6 00:04:56.561350 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 6 00:04:56.563122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 6 00:04:56.563184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 6 00:04:56.564062 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 6 00:04:56.564094 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 6 00:04:56.564845 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 6 00:04:56.564891 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 6 00:04:56.566576 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 6 00:04:56.566624 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 6 00:04:56.568744 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 6 00:04:56.568801 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 6 00:04:56.571201 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 6 00:04:56.571249 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 6 00:04:56.585156 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 6 00:04:56.585983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 6 00:04:56.586044 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 6 00:04:56.587763 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 6 00:04:56.587810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:04:56.593322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 6 00:04:56.593455 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. 
Sep 6 00:04:56.596213 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 6 00:04:56.597715 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 6 00:04:56.608694 systemd[1]: Switching root. Sep 6 00:04:56.640150 systemd-journald[237]: Journal stopped Sep 6 00:04:57.407337 systemd-journald[237]: Received SIGTERM from PID 1 (systemd). Sep 6 00:04:57.407395 kernel: SELinux: policy capability network_peer_controls=1 Sep 6 00:04:57.407407 kernel: SELinux: policy capability open_perms=1 Sep 6 00:04:57.407424 kernel: SELinux: policy capability extended_socket_class=1 Sep 6 00:04:57.407433 kernel: SELinux: policy capability always_check_network=0 Sep 6 00:04:57.407442 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 6 00:04:57.407452 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 6 00:04:57.407467 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 6 00:04:57.407476 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 6 00:04:57.407486 systemd[1]: Successfully loaded SELinux policy in 37.837ms. Sep 6 00:04:57.407502 kernel: audit: type=1403 audit(1757117096.833:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 6 00:04:57.407513 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.888ms. Sep 6 00:04:57.407537 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 6 00:04:57.407550 systemd[1]: Detected virtualization kvm. Sep 6 00:04:57.407562 systemd[1]: Detected architecture arm64. Sep 6 00:04:57.407572 systemd[1]: Detected first boot. Sep 6 00:04:57.407582 systemd[1]: Initializing machine ID from VM UUID. Sep 6 00:04:57.407593 zram_generator::config[1044]: No configuration found. 
Sep 6 00:04:57.407604 systemd[1]: Populated /etc with preset unit settings. Sep 6 00:04:57.407615 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 6 00:04:57.407627 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 6 00:04:57.407638 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 6 00:04:57.407648 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 6 00:04:57.407659 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 6 00:04:57.407670 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 6 00:04:57.407680 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 6 00:04:57.407691 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 6 00:04:57.407702 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 6 00:04:57.407713 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 6 00:04:57.407725 systemd[1]: Created slice user.slice - User and Session Slice. Sep 6 00:04:57.407736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 6 00:04:57.407746 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 6 00:04:57.407757 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 6 00:04:57.407768 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 6 00:04:57.407779 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 6 00:04:57.407790 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 6 00:04:57.407800 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
Sep 6 00:04:57.407810 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 6 00:04:57.407822 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 6 00:04:57.407833 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 6 00:04:57.407843 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 6 00:04:57.407854 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 6 00:04:57.407865 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 6 00:04:57.407875 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 6 00:04:57.407886 systemd[1]: Reached target slices.target - Slice Units. Sep 6 00:04:57.407896 systemd[1]: Reached target swap.target - Swaps. Sep 6 00:04:57.407912 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Sep 6 00:04:57.407922 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 6 00:04:57.407981 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 6 00:04:57.407995 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 6 00:04:57.408006 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 6 00:04:57.408016 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 6 00:04:57.408027 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 6 00:04:57.408037 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 6 00:04:57.408047 systemd[1]: Mounting media.mount - External Media Directory... Sep 6 00:04:57.408061 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 6 00:04:57.408073 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 6 00:04:57.408084 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Sep 6 00:04:57.408094 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 6 00:04:57.408105 systemd[1]: Reached target machines.target - Containers. Sep 6 00:04:57.408116 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 6 00:04:57.408126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:04:57.408137 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 6 00:04:57.408151 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 6 00:04:57.408162 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:04:57.408173 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 6 00:04:57.408184 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:04:57.408194 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 6 00:04:57.408205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:04:57.408215 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 6 00:04:57.408226 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 6 00:04:57.408238 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 6 00:04:57.408248 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 6 00:04:57.408258 systemd[1]: Stopped systemd-fsck-usr.service. Sep 6 00:04:57.408268 kernel: loop: module loaded Sep 6 00:04:57.408278 kernel: fuse: init (API version 7.39) Sep 6 00:04:57.408288 systemd[1]: Starting systemd-journald.service - Journal Service... 
Sep 6 00:04:57.408299 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 6 00:04:57.408310 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 6 00:04:57.408321 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 6 00:04:57.408333 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 6 00:04:57.408343 kernel: ACPI: bus type drm_connector registered Sep 6 00:04:57.408372 systemd-journald[1111]: Collecting audit messages is disabled. Sep 6 00:04:57.408395 systemd[1]: verity-setup.service: Deactivated successfully. Sep 6 00:04:57.408408 systemd-journald[1111]: Journal started Sep 6 00:04:57.408430 systemd-journald[1111]: Runtime Journal (/run/log/journal/4ba2ed9da3104c37bb120e50c33a6a2c) is 5.9M, max 47.3M, 41.4M free. Sep 6 00:04:57.410368 systemd[1]: Stopped verity-setup.service. Sep 6 00:04:57.222221 systemd[1]: Queued start job for default target multi-user.target. Sep 6 00:04:57.242032 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 6 00:04:57.242396 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 6 00:04:57.413043 systemd[1]: Started systemd-journald.service - Journal Service. Sep 6 00:04:57.413699 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 6 00:04:57.414837 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 6 00:04:57.416264 systemd[1]: Mounted media.mount - External Media Directory. Sep 6 00:04:57.417133 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 6 00:04:57.418112 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 6 00:04:57.419103 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 6 00:04:57.420978 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Sep 6 00:04:57.422271 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 6 00:04:57.423609 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 6 00:04:57.423836 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 6 00:04:57.425179 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:04:57.425425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:04:57.426674 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:04:57.426895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 6 00:04:57.428190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:04:57.428418 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:04:57.429733 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 6 00:04:57.430203 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 6 00:04:57.431463 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:04:57.431699 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:04:57.433000 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 6 00:04:57.434210 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 6 00:04:57.435608 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 6 00:04:57.447565 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 6 00:04:57.457035 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 6 00:04:57.458926 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 6 00:04:57.459782 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Sep 6 00:04:57.459813 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 6 00:04:57.461626 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Sep 6 00:04:57.463809 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 6 00:04:57.466060 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 6 00:04:57.467001 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:04:57.468767 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 6 00:04:57.470585 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 6 00:04:57.471544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:04:57.473107 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 6 00:04:57.474166 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:04:57.478110 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 6 00:04:57.480179 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 6 00:04:57.480904 systemd-journald[1111]: Time spent on flushing to /var/log/journal/4ba2ed9da3104c37bb120e50c33a6a2c is 21.798ms for 860 entries. Sep 6 00:04:57.480904 systemd-journald[1111]: System Journal (/var/log/journal/4ba2ed9da3104c37bb120e50c33a6a2c) is 8.0M, max 195.6M, 187.6M free. Sep 6 00:04:57.516623 systemd-journald[1111]: Received client request to flush runtime journal. 
Sep 6 00:04:57.516673 kernel: loop0: detected capacity change from 0 to 211168 Sep 6 00:04:57.516696 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 6 00:04:57.488116 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 6 00:04:57.490295 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 6 00:04:57.491409 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 6 00:04:57.492673 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 6 00:04:57.493859 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 6 00:04:57.500256 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 6 00:04:57.504501 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 6 00:04:57.514225 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Sep 6 00:04:57.518205 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Sep 6 00:04:57.522445 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 6 00:04:57.531146 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:04:57.538836 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 6 00:04:57.539330 udevadm[1168]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Sep 6 00:04:57.543045 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Sep 6 00:04:57.545593 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 6 00:04:57.556171 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 6 00:04:57.572278 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. 
Sep 6 00:04:57.572299 systemd-tmpfiles[1174]: ACLs are not supported, ignoring. Sep 6 00:04:57.574960 kernel: loop1: detected capacity change from 0 to 114328 Sep 6 00:04:57.582491 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 6 00:04:57.625965 kernel: loop2: detected capacity change from 0 to 114432 Sep 6 00:04:57.658964 kernel: loop3: detected capacity change from 0 to 211168 Sep 6 00:04:57.665954 kernel: loop4: detected capacity change from 0 to 114328 Sep 6 00:04:57.676969 kernel: loop5: detected capacity change from 0 to 114432 Sep 6 00:04:57.682676 (sd-merge)[1181]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 6 00:04:57.683985 (sd-merge)[1181]: Merged extensions into '/usr'. Sep 6 00:04:57.687180 systemd[1]: Reloading requested from client PID 1155 ('systemd-sysext') (unit systemd-sysext.service)... Sep 6 00:04:57.687196 systemd[1]: Reloading... Sep 6 00:04:57.722994 zram_generator::config[1205]: No configuration found. Sep 6 00:04:57.791187 ldconfig[1150]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 6 00:04:57.844090 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:04:57.881353 systemd[1]: Reloading finished in 193 ms. Sep 6 00:04:57.908615 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 6 00:04:57.909791 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 6 00:04:57.929120 systemd[1]: Starting ensure-sysext.service... Sep 6 00:04:57.934569 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 6 00:04:57.938648 systemd[1]: Reloading requested from client PID 1241 ('systemctl') (unit ensure-sysext.service)... 
Sep 6 00:04:57.938664 systemd[1]: Reloading... Sep 6 00:04:57.956511 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 6 00:04:57.957274 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 6 00:04:57.957982 systemd-tmpfiles[1242]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 6 00:04:57.958204 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Sep 6 00:04:57.958248 systemd-tmpfiles[1242]: ACLs are not supported, ignoring. Sep 6 00:04:57.961842 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Sep 6 00:04:57.961854 systemd-tmpfiles[1242]: Skipping /boot Sep 6 00:04:57.968744 systemd-tmpfiles[1242]: Detected autofs mount point /boot during canonicalization of boot. Sep 6 00:04:57.968760 systemd-tmpfiles[1242]: Skipping /boot Sep 6 00:04:57.991015 zram_generator::config[1265]: No configuration found. Sep 6 00:04:58.074512 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:04:58.112454 systemd[1]: Reloading finished in 173 ms. Sep 6 00:04:58.130174 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 6 00:04:58.143361 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 6 00:04:58.150810 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Sep 6 00:04:58.153204 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 6 00:04:58.155359 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 6 00:04:58.160227 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 6 00:04:58.165290 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 6 00:04:58.170253 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 6 00:04:58.173652 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:04:58.175278 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:04:58.179069 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 6 00:04:58.184067 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:04:58.184878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:04:58.186459 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 6 00:04:58.188131 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 6 00:04:58.189770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:04:58.189895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:04:58.191393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:04:58.191551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:04:58.193210 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:04:58.193328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:04:58.200716 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 6 00:04:58.202069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:04:58.206298 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Sep 6 00:04:58.207691 systemd-udevd[1316]: Using default interface naming scheme 'v255'. Sep 6 00:04:58.213256 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 6 00:04:58.214245 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:04:58.216923 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 6 00:04:58.218856 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 6 00:04:58.219037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:04:58.226976 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 6 00:04:58.234156 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 6 00:04:58.236115 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 6 00:04:58.238747 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 6 00:04:58.240466 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 6 00:04:58.240604 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 6 00:04:58.253018 systemd[1]: Finished ensure-sysext.service. Sep 6 00:04:58.254187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 6 00:04:58.254347 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 6 00:04:58.255418 augenrules[1366]: No rules Sep 6 00:04:58.270570 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Sep 6 00:04:58.274128 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 6 00:04:58.277758 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 6 00:04:58.278183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Sep 6 00:04:58.283027 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1358) Sep 6 00:04:58.293236 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 6 00:04:58.298158 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 6 00:04:58.300329 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 6 00:04:58.304167 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 6 00:04:58.306017 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 6 00:04:58.306852 systemd-resolved[1309]: Positive Trust Anchors: Sep 6 00:04:58.306913 systemd-resolved[1309]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 6 00:04:58.307000 systemd-resolved[1309]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 6 00:04:58.309746 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 6 00:04:58.310845 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 6 00:04:58.311337 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Sep 6 00:04:58.311527 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 6 00:04:58.312701 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 6 00:04:58.313279 systemd-resolved[1309]: Defaulting to hostname 'linux'. Sep 6 00:04:58.315069 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 6 00:04:58.316048 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 6 00:04:58.325325 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 6 00:04:58.327192 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 6 00:04:58.344855 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 6 00:04:58.348140 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 6 00:04:58.383265 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 6 00:04:58.393046 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 6 00:04:58.395810 systemd-networkd[1383]: lo: Link UP Sep 6 00:04:58.395823 systemd-networkd[1383]: lo: Gained carrier Sep 6 00:04:58.396190 systemd[1]: Reached target time-set.target - System Time Set. Sep 6 00:04:58.397322 systemd-networkd[1383]: Enumeration completed Sep 6 00:04:58.398199 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:04:58.398208 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Sep 6 00:04:58.399312 systemd-networkd[1383]: eth0: Link UP Sep 6 00:04:58.399318 systemd-networkd[1383]: eth0: Gained carrier Sep 6 00:04:58.399332 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 6 00:04:58.404176 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 6 00:04:58.405453 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 6 00:04:58.408145 systemd[1]: Reached target network.target - Network. Sep 6 00:04:58.410120 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 6 00:04:58.412008 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.93/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 6 00:04:58.413652 systemd-timesyncd[1384]: Network configuration changed, trying to establish connection. Sep 6 00:04:58.414291 systemd-timesyncd[1384]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 6 00:04:58.414335 systemd-timesyncd[1384]: Initial clock synchronization to Sat 2025-09-06 00:04:58.022074 UTC. Sep 6 00:04:58.415663 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Sep 6 00:04:58.418200 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Sep 6 00:04:58.432722 lvm[1400]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Sep 6 00:04:58.442723 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 6 00:04:58.454453 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Sep 6 00:04:58.455751 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 6 00:04:58.456734 systemd[1]: Reached target sysinit.target - System Initialization. Sep 6 00:04:58.457680 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Sep 6 00:04:58.458706 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 6 00:04:58.459872 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 6 00:04:58.460844 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 6 00:04:58.461919 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 6 00:04:58.462828 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 6 00:04:58.462860 systemd[1]: Reached target paths.target - Path Units. Sep 6 00:04:58.463593 systemd[1]: Reached target timers.target - Timer Units. Sep 6 00:04:58.465102 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 6 00:04:58.467250 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 6 00:04:58.475902 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 6 00:04:58.477861 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Sep 6 00:04:58.479294 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 6 00:04:58.480243 systemd[1]: Reached target sockets.target - Socket Units. Sep 6 00:04:58.480971 systemd[1]: Reached target basic.target - Basic System. Sep 6 00:04:58.481689 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 6 00:04:58.481724 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 6 00:04:58.482651 systemd[1]: Starting containerd.service - containerd container runtime... Sep 6 00:04:58.484465 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 6 00:04:58.486782 lvm[1408]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Sep 6 00:04:58.487115 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 6 00:04:58.491130 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 6 00:04:58.493086 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 6 00:04:58.494195 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 6 00:04:58.495191 jq[1411]: false
Sep 6 00:04:58.496630 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 6 00:04:58.498671 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 6 00:04:58.505150 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 6 00:04:58.509701 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 6 00:04:58.511423 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 6 00:04:58.511825 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 6 00:04:58.512047 dbus-daemon[1410]: [system] SELinux support is enabled
Sep 6 00:04:58.512430 systemd[1]: Starting update-engine.service - Update Engine...
Sep 6 00:04:58.515052 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 6 00:04:58.516819 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 6 00:04:58.522038 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 6 00:04:58.524035 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found loop3
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found loop4
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found loop5
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda1
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda2
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda3
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found usr
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda4
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda6
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda7
Sep 6 00:04:58.526295 extend-filesystems[1412]: Found vda9
Sep 6 00:04:58.526295 extend-filesystems[1412]: Checking size of /dev/vda9
Sep 6 00:04:58.524897 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 6 00:04:58.525207 systemd[1]: motdgen.service: Deactivated successfully.
Sep 6 00:04:58.525383 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 6 00:04:58.551485 jq[1425]: true
Sep 6 00:04:58.529373 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 6 00:04:58.529533 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 6 00:04:58.537318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 6 00:04:58.537367 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 6 00:04:58.541438 (ntainerd)[1433]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 6 00:04:58.541820 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 6 00:04:58.541842 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 6 00:04:58.558454 update_engine[1423]: I20250906 00:04:58.551764 1423 main.cc:92] Flatcar Update Engine starting
Sep 6 00:04:58.558454 update_engine[1423]: I20250906 00:04:58.555032 1423 update_check_scheduler.cc:74] Next update check in 9m24s
Sep 6 00:04:58.566640 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 6 00:04:58.566663 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1344)
Sep 6 00:04:58.566754 extend-filesystems[1412]: Resized partition /dev/vda9
Sep 6 00:04:58.567849 jq[1434]: true
Sep 6 00:04:58.554335 systemd[1]: Started update-engine.service - Update Engine.
Sep 6 00:04:58.568151 extend-filesystems[1448]: resize2fs 1.47.1 (20-May-2024)
Sep 6 00:04:58.565162 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 6 00:04:58.571925 tar[1429]: linux-arm64/LICENSE
Sep 6 00:04:58.571925 tar[1429]: linux-arm64/helm
Sep 6 00:04:58.574499 systemd-logind[1422]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 6 00:04:58.574773 systemd-logind[1422]: New seat seat0.
Sep 6 00:04:58.601713 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 6 00:04:58.605983 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 6 00:04:58.622454 extend-filesystems[1448]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 6 00:04:58.622454 extend-filesystems[1448]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 6 00:04:58.622454 extend-filesystems[1448]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 6 00:04:58.626476 extend-filesystems[1412]: Resized filesystem in /dev/vda9
Sep 6 00:04:58.625049 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 6 00:04:58.627167 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 6 00:04:58.639614 bash[1473]: Updated "/home/core/.ssh/authorized_keys"
Sep 6 00:04:58.641356 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 6 00:04:58.642895 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 6 00:04:58.655970 locksmithd[1447]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 6 00:04:58.709787 containerd[1433]: time="2025-09-06T00:04:58.709694920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 6 00:04:58.737038 containerd[1433]: time="2025-09-06T00:04:58.736177480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:04:58.737671 containerd[1433]: time="2025-09-06T00:04:58.737631120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.103-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:04:58.737671 containerd[1433]: time="2025-09-06T00:04:58.737667200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 6 00:04:58.737740 containerd[1433]: time="2025-09-06T00:04:58.737683320Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 6 00:04:58.737862 containerd[1433]: time="2025-09-06T00:04:58.737840680Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 6 00:04:58.737893 containerd[1433]: time="2025-09-06T00:04:58.737863400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 6 00:04:58.737945 containerd[1433]: time="2025-09-06T00:04:58.737921720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:04:58.737972 containerd[1433]: time="2025-09-06T00:04:58.737952720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738141 containerd[1433]: time="2025-09-06T00:04:58.738117960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738141 containerd[1433]: time="2025-09-06T00:04:58.738139400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738192 containerd[1433]: time="2025-09-06T00:04:58.738152520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738192 containerd[1433]: time="2025-09-06T00:04:58.738162840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738248 containerd[1433]: time="2025-09-06T00:04:58.738231080Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738436 containerd[1433]: time="2025-09-06T00:04:58.738416200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738547 containerd[1433]: time="2025-09-06T00:04:58.738526440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 6 00:04:58.738573 containerd[1433]: time="2025-09-06T00:04:58.738549400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 6 00:04:58.738644 containerd[1433]: time="2025-09-06T00:04:58.738626040Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 6 00:04:58.738688 containerd[1433]: time="2025-09-06T00:04:58.738674160Z" level=info msg="metadata content store policy set" policy=shared
Sep 6 00:04:58.741857 containerd[1433]: time="2025-09-06T00:04:58.741824040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 6 00:04:58.741916 containerd[1433]: time="2025-09-06T00:04:58.741876560Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 6 00:04:58.741916 containerd[1433]: time="2025-09-06T00:04:58.741892840Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 6 00:04:58.741916 containerd[1433]: time="2025-09-06T00:04:58.741907720Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 6 00:04:58.741994 containerd[1433]: time="2025-09-06T00:04:58.741921800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 6 00:04:58.742103 containerd[1433]: time="2025-09-06T00:04:58.742075880Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 6 00:04:58.742316 containerd[1433]: time="2025-09-06T00:04:58.742297040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742397920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742417600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742431680Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742445120Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742458320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742470480Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742483760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742497400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742509400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742534760Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742547440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742571760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742585760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744671 containerd[1433]: time="2025-09-06T00:04:58.742597760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742610720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742624640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742638520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742651720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742664680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742678560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742693040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742704320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742716520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742729880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742745680Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742765920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742778040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.744959 containerd[1433]: time="2025-09-06T00:04:58.742796520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743441960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743471440Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743482560Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743495360Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743505440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743529920Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743543520Z" level=info msg="NRI interface is disabled by configuration."
Sep 6 00:04:58.745198 containerd[1433]: time="2025-09-06T00:04:58.743563560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 6 00:04:58.745334 containerd[1433]: time="2025-09-06T00:04:58.743904480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 6 00:04:58.745334 containerd[1433]: time="2025-09-06T00:04:58.743985960Z" level=info msg="Connect containerd service"
Sep 6 00:04:58.745334 containerd[1433]: time="2025-09-06T00:04:58.744020000Z" level=info msg="using legacy CRI server"
Sep 6 00:04:58.745334 containerd[1433]: time="2025-09-06T00:04:58.744026760Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 6 00:04:58.745334 containerd[1433]: time="2025-09-06T00:04:58.744109920Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 6 00:04:58.746202 containerd[1433]: time="2025-09-06T00:04:58.746164440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:04:58.746406 containerd[1433]: time="2025-09-06T00:04:58.746364120Z" level=info msg="Start subscribing containerd event"
Sep 6 00:04:58.746432 containerd[1433]: time="2025-09-06T00:04:58.746420640Z" level=info msg="Start recovering state"
Sep 6 00:04:58.746535 containerd[1433]: time="2025-09-06T00:04:58.746507960Z" level=info msg="Start event monitor"
Sep 6 00:04:58.746563 containerd[1433]: time="2025-09-06T00:04:58.746534680Z" level=info msg="Start snapshots syncer"
Sep 6 00:04:58.746563 containerd[1433]: time="2025-09-06T00:04:58.746545680Z" level=info msg="Start cni network conf syncer for default"
Sep 6 00:04:58.746611 containerd[1433]: time="2025-09-06T00:04:58.746584760Z" level=info msg="Start streaming server"
Sep 6 00:04:58.746969 containerd[1433]: time="2025-09-06T00:04:58.746924760Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 6 00:04:58.747020 containerd[1433]: time="2025-09-06T00:04:58.747005480Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 6 00:04:58.747078 containerd[1433]: time="2025-09-06T00:04:58.747064280Z" level=info msg="containerd successfully booted in 0.038720s"
Sep 6 00:04:58.747152 systemd[1]: Started containerd.service - containerd container runtime.
Sep 6 00:04:58.960235 tar[1429]: linux-arm64/README.md
Sep 6 00:04:58.975446 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 6 00:04:59.027982 sshd_keygen[1445]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 6 00:04:59.047330 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 6 00:04:59.058489 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 6 00:04:59.062844 systemd[1]: issuegen.service: Deactivated successfully.
Sep 6 00:04:59.063038 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 6 00:04:59.067624 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 6 00:04:59.081761 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 6 00:04:59.095279 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 6 00:04:59.097207 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 6 00:04:59.098241 systemd[1]: Reached target getty.target - Login Prompts.
Sep 6 00:04:59.893185 systemd-networkd[1383]: eth0: Gained IPv6LL
Sep 6 00:04:59.895948 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 6 00:04:59.897379 systemd[1]: Reached target network-online.target - Network is Online.
Sep 6 00:04:59.907339 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 6 00:04:59.909631 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:04:59.911592 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 6 00:04:59.931899 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 6 00:04:59.937276 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 6 00:04:59.937464 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 6 00:04:59.938741 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 6 00:05:00.466562 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:00.467796 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 6 00:05:00.470061 systemd[1]: Startup finished in 557ms (kernel) + 5.144s (initrd) + 3.674s (userspace) = 9.376s.
Sep 6 00:05:00.470394 (kubelet)[1523]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 00:05:00.839559 kubelet[1523]: E0906 00:05:00.839452 1523 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:05:00.842627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:05:00.842817 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:05:04.050524 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 6 00:05:04.051596 systemd[1]: Started sshd@0-10.0.0.93:22-10.0.0.1:52912.service - OpenSSH per-connection server daemon (10.0.0.1:52912).
Sep 6 00:05:04.096324 sshd[1536]: Accepted publickey for core from 10.0.0.1 port 52912 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:04.097730 sshd[1536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:04.105897 systemd-logind[1422]: New session 1 of user core.
Sep 6 00:05:04.106885 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 6 00:05:04.116149 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 6 00:05:04.125997 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 6 00:05:04.128114 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 6 00:05:04.134267 (systemd)[1540]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 6 00:05:04.208668 systemd[1540]: Queued start job for default target default.target.
Sep 6 00:05:04.221830 systemd[1540]: Created slice app.slice - User Application Slice.
Sep 6 00:05:04.221859 systemd[1540]: Reached target paths.target - Paths.
Sep 6 00:05:04.221871 systemd[1540]: Reached target timers.target - Timers.
Sep 6 00:05:04.223092 systemd[1540]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 6 00:05:04.233445 systemd[1540]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 6 00:05:04.233503 systemd[1540]: Reached target sockets.target - Sockets.
Sep 6 00:05:04.233515 systemd[1540]: Reached target basic.target - Basic System.
Sep 6 00:05:04.233548 systemd[1540]: Reached target default.target - Main User Target.
Sep 6 00:05:04.233571 systemd[1540]: Startup finished in 94ms.
Sep 6 00:05:04.233851 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 6 00:05:04.235129 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 6 00:05:04.296207 systemd[1]: Started sshd@1-10.0.0.93:22-10.0.0.1:52924.service - OpenSSH per-connection server daemon (10.0.0.1:52924).
Sep 6 00:05:04.328400 sshd[1551]: Accepted publickey for core from 10.0.0.1 port 52924 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:04.329666 sshd[1551]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:04.333527 systemd-logind[1422]: New session 2 of user core.
Sep 6 00:05:04.344085 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 6 00:05:04.398849 sshd[1551]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:04.418102 systemd[1]: sshd@1-10.0.0.93:22-10.0.0.1:52924.service: Deactivated successfully.
Sep 6 00:05:04.419379 systemd[1]: session-2.scope: Deactivated successfully.
Sep 6 00:05:04.420515 systemd-logind[1422]: Session 2 logged out. Waiting for processes to exit.
Sep 6 00:05:04.421611 systemd[1]: Started sshd@2-10.0.0.93:22-10.0.0.1:52932.service - OpenSSH per-connection server daemon (10.0.0.1:52932).
Sep 6 00:05:04.422395 systemd-logind[1422]: Removed session 2.
Sep 6 00:05:04.456056 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 52932 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:04.457172 sshd[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:04.460521 systemd-logind[1422]: New session 3 of user core.
Sep 6 00:05:04.467093 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 6 00:05:04.513877 sshd[1558]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:04.525350 systemd[1]: sshd@2-10.0.0.93:22-10.0.0.1:52932.service: Deactivated successfully.
Sep 6 00:05:04.526639 systemd[1]: session-3.scope: Deactivated successfully.
Sep 6 00:05:04.529050 systemd-logind[1422]: Session 3 logged out. Waiting for processes to exit.
Sep 6 00:05:04.530094 systemd[1]: Started sshd@3-10.0.0.93:22-10.0.0.1:52940.service - OpenSSH per-connection server daemon (10.0.0.1:52940).
Sep 6 00:05:04.530779 systemd-logind[1422]: Removed session 3.
Sep 6 00:05:04.568980 sshd[1565]: Accepted publickey for core from 10.0.0.1 port 52940 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:04.571337 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:04.575023 systemd-logind[1422]: New session 4 of user core.
Sep 6 00:05:04.581100 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 6 00:05:04.632411 sshd[1565]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:04.644294 systemd[1]: sshd@3-10.0.0.93:22-10.0.0.1:52940.service: Deactivated successfully.
Sep 6 00:05:04.645793 systemd[1]: session-4.scope: Deactivated successfully.
Sep 6 00:05:04.648668 systemd-logind[1422]: Session 4 logged out. Waiting for processes to exit.
Sep 6 00:05:04.666255 systemd[1]: Started sshd@4-10.0.0.93:22-10.0.0.1:52948.service - OpenSSH per-connection server daemon (10.0.0.1:52948).
Sep 6 00:05:04.667950 systemd-logind[1422]: Removed session 4.
Sep 6 00:05:04.697427 sshd[1572]: Accepted publickey for core from 10.0.0.1 port 52948 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:04.698923 sshd[1572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:04.702705 systemd-logind[1422]: New session 5 of user core.
Sep 6 00:05:04.714051 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 6 00:05:04.767970 sudo[1575]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 6 00:05:04.768242 sudo[1575]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:04.783710 sudo[1575]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:04.785355 sshd[1572]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:04.793311 systemd[1]: sshd@4-10.0.0.93:22-10.0.0.1:52948.service: Deactivated successfully.
Sep 6 00:05:04.794857 systemd[1]: session-5.scope: Deactivated successfully.
Sep 6 00:05:04.797085 systemd-logind[1422]: Session 5 logged out. Waiting for processes to exit.
Sep 6 00:05:04.798282 systemd[1]: Started sshd@5-10.0.0.93:22-10.0.0.1:52954.service - OpenSSH per-connection server daemon (10.0.0.1:52954).
Sep 6 00:05:04.798949 systemd-logind[1422]: Removed session 5.
Sep 6 00:05:04.836778 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 52954 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:04.838081 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:04.842170 systemd-logind[1422]: New session 6 of user core.
Sep 6 00:05:04.856082 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 6 00:05:04.906604 sudo[1584]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 6 00:05:04.906871 sudo[1584]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:04.909690 sudo[1584]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:04.913869 sudo[1583]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 6 00:05:04.914165 sudo[1583]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:04.934351 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 6 00:05:04.939158 auditctl[1587]: No rules
Sep 6 00:05:04.940017 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 6 00:05:04.940221 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 6 00:05:04.941984 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 6 00:05:04.969260 augenrules[1605]: No rules
Sep 6 00:05:04.971971 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 6 00:05:04.974037 sudo[1583]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:04.976319 sshd[1580]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:04.984199 systemd[1]: sshd@5-10.0.0.93:22-10.0.0.1:52954.service: Deactivated successfully.
Sep 6 00:05:04.985390 systemd[1]: session-6.scope: Deactivated successfully.
Sep 6 00:05:04.987516 systemd-logind[1422]: Session 6 logged out. Waiting for processes to exit.
Sep 6 00:05:04.999301 systemd[1]: Started sshd@6-10.0.0.93:22-10.0.0.1:52960.service - OpenSSH per-connection server daemon (10.0.0.1:52960).
Sep 6 00:05:05.001035 systemd-logind[1422]: Removed session 6.
Sep 6 00:05:05.035575 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 52960 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:05.036888 sshd[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:05.043225 systemd-logind[1422]: New session 7 of user core.
Sep 6 00:05:05.049288 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 6 00:05:05.099832 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 6 00:05:05.101032 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 6 00:05:05.415207 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 6 00:05:05.415409 (dockerd)[1636]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 6 00:05:05.643993 dockerd[1636]: time="2025-09-06T00:05:05.643905245Z" level=info msg="Starting up"
Sep 6 00:05:05.792414 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1776437985-merged.mount: Deactivated successfully.
Sep 6 00:05:05.812463 dockerd[1636]: time="2025-09-06T00:05:05.812420065Z" level=info msg="Loading containers: start."
Sep 6 00:05:05.908961 kernel: Initializing XFRM netlink socket
Sep 6 00:05:05.986630 systemd-networkd[1383]: docker0: Link UP
Sep 6 00:05:06.005050 dockerd[1636]: time="2025-09-06T00:05:06.005007996Z" level=info msg="Loading containers: done."
Sep 6 00:05:06.018324 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2611174090-merged.mount: Deactivated successfully.
Sep 6 00:05:06.020139 dockerd[1636]: time="2025-09-06T00:05:06.020102661Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 6 00:05:06.020210 dockerd[1636]: time="2025-09-06T00:05:06.020195872Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 6 00:05:06.020310 dockerd[1636]: time="2025-09-06T00:05:06.020295320Z" level=info msg="Daemon has completed initialization"
Sep 6 00:05:06.048633 dockerd[1636]: time="2025-09-06T00:05:06.048453513Z" level=info msg="API listen on /run/docker.sock"
Sep 6 00:05:06.048675 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 6 00:05:06.663496 containerd[1433]: time="2025-09-06T00:05:06.663446642Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 6 00:05:07.429496 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2252546779.mount: Deactivated successfully.
Sep 6 00:05:08.723064 containerd[1433]: time="2025-09-06T00:05:08.723009293Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:08.724559 containerd[1433]: time="2025-09-06T00:05:08.724385389Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 6 00:05:08.725641 containerd[1433]: time="2025-09-06T00:05:08.725268618Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:08.728465 containerd[1433]: time="2025-09-06T00:05:08.728432139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:08.729698 containerd[1433]: time="2025-09-06T00:05:08.729665136Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 2.066164433s"
Sep 6 00:05:08.729806 containerd[1433]: time="2025-09-06T00:05:08.729789201Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 6 00:05:08.731579 containerd[1433]: time="2025-09-06T00:05:08.731546003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 6 00:05:09.939494 containerd[1433]: time="2025-09-06T00:05:09.939433266Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:09.940594 containerd[1433]: time="2025-09-06T00:05:09.940561962Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 6 00:05:09.941965 containerd[1433]: time="2025-09-06T00:05:09.941523954Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:09.944619 containerd[1433]: time="2025-09-06T00:05:09.944589034Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:09.946097 containerd[1433]: time="2025-09-06T00:05:09.945981957Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.214399402s"
Sep 6 00:05:09.946097 containerd[1433]: time="2025-09-06T00:05:09.946015953Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 6 00:05:09.946468 containerd[1433]: time="2025-09-06T00:05:09.946422443Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 6 00:05:11.093429 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 6 00:05:11.104443 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:05:11.208313 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:11.212022 (kubelet)[1854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 00:05:11.285426 containerd[1433]: time="2025-09-06T00:05:11.285376502Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:11.286070 containerd[1433]: time="2025-09-06T00:05:11.286040878Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 6 00:05:11.288846 containerd[1433]: time="2025-09-06T00:05:11.288808244Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:11.292354 containerd[1433]: time="2025-09-06T00:05:11.292319908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:11.295281 containerd[1433]: time="2025-09-06T00:05:11.294502703Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.348045684s"
Sep 6 00:05:11.295281 containerd[1433]: time="2025-09-06T00:05:11.294538624Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 6 00:05:11.295504 containerd[1433]: time="2025-09-06T00:05:11.295473146Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 6 00:05:11.302573 kubelet[1854]: E0906 00:05:11.302540 1854 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:05:11.305638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:05:11.305809 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:05:12.389393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount771894631.mount: Deactivated successfully.
Sep 6 00:05:12.748050 containerd[1433]: time="2025-09-06T00:05:12.747990493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:12.748706 containerd[1433]: time="2025-09-06T00:05:12.748669774Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 6 00:05:12.749547 containerd[1433]: time="2025-09-06T00:05:12.749493318Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:12.751890 containerd[1433]: time="2025-09-06T00:05:12.751724793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:12.752376 containerd[1433]: time="2025-09-06T00:05:12.752351452Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.456832113s"
Sep 6 00:05:12.752528 containerd[1433]: time="2025-09-06T00:05:12.752432625Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 6 00:05:12.753095 containerd[1433]: time="2025-09-06T00:05:12.753070823Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 6 00:05:13.287607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1285601213.mount: Deactivated successfully.
Sep 6 00:05:14.356518 containerd[1433]: time="2025-09-06T00:05:14.356466075Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:14.357428 containerd[1433]: time="2025-09-06T00:05:14.357192399Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 6 00:05:14.358248 containerd[1433]: time="2025-09-06T00:05:14.358204739Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:14.361702 containerd[1433]: time="2025-09-06T00:05:14.361661563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:14.363232 containerd[1433]: time="2025-09-06T00:05:14.363090569Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.60998574s"
Sep 6 00:05:14.363232 containerd[1433]: time="2025-09-06T00:05:14.363127483Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 6 00:05:14.363683 containerd[1433]: time="2025-09-06T00:05:14.363642136Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 6 00:05:14.913084 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3068248654.mount: Deactivated successfully.
Sep 6 00:05:14.917885 containerd[1433]: time="2025-09-06T00:05:14.917145239Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:14.918615 containerd[1433]: time="2025-09-06T00:05:14.918577463Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 6 00:05:14.919871 containerd[1433]: time="2025-09-06T00:05:14.919837315Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:14.922181 containerd[1433]: time="2025-09-06T00:05:14.922139308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:14.923365 containerd[1433]: time="2025-09-06T00:05:14.923332802Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 559.655381ms"
Sep 6 00:05:14.923434 containerd[1433]: time="2025-09-06T00:05:14.923365862Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 6 00:05:14.924241 containerd[1433]: time="2025-09-06T00:05:14.924213776Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 6 00:05:15.380414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2263565751.mount: Deactivated successfully.
Sep 6 00:05:17.068176 containerd[1433]: time="2025-09-06T00:05:17.068125812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:17.069271 containerd[1433]: time="2025-09-06T00:05:17.069238539Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 6 00:05:17.069956 containerd[1433]: time="2025-09-06T00:05:17.069838907Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:17.073748 containerd[1433]: time="2025-09-06T00:05:17.073712507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 6 00:05:17.075733 containerd[1433]: time="2025-09-06T00:05:17.075694367Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.151446757s"
Sep 6 00:05:17.075789 containerd[1433]: time="2025-09-06T00:05:17.075740123Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 6 00:05:21.471510 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 6 00:05:21.481116 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:05:21.587041 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:21.590735 (kubelet)[2014]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 6 00:05:21.622950 kubelet[2014]: E0906 00:05:21.622891 2014 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 6 00:05:21.625638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 6 00:05:21.625794 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 6 00:05:22.803574 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:22.812142 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:05:22.831430 systemd[1]: Reloading requested from client PID 2029 ('systemctl') (unit session-7.scope)...
Sep 6 00:05:22.831444 systemd[1]: Reloading...
Sep 6 00:05:22.899965 zram_generator::config[2068]: No configuration found.
Sep 6 00:05:23.023256 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 6 00:05:23.078534 systemd[1]: Reloading finished in 246 ms.
Sep 6 00:05:23.119895 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:23.121267 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:05:23.123650 systemd[1]: kubelet.service: Deactivated successfully.
Sep 6 00:05:23.123829 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:23.132275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 6 00:05:23.232261 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 6 00:05:23.235813 (kubelet)[2115]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 6 00:05:23.269843 kubelet[2115]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:05:23.269843 kubelet[2115]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 6 00:05:23.269843 kubelet[2115]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 6 00:05:23.270195 kubelet[2115]: I0906 00:05:23.269886 2115 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:05:24.027804 kubelet[2115]: I0906 00:05:24.027220 2115 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 6 00:05:24.027804 kubelet[2115]: I0906 00:05:24.027252 2115 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:05:24.027804 kubelet[2115]: I0906 00:05:24.027595 2115 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 6 00:05:24.042428 kubelet[2115]: E0906 00:05:24.042388 2115 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.93:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 6 00:05:24.042846 kubelet[2115]: I0906 00:05:24.042819 2115 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:05:24.048448 kubelet[2115]: E0906 00:05:24.048420 2115 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:05:24.048448 kubelet[2115]: I0906 00:05:24.048450 2115 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:05:24.050893 kubelet[2115]: I0906 00:05:24.050878 2115 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:05:24.052023 kubelet[2115]: I0906 00:05:24.051985 2115 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:05:24.052175 kubelet[2115]: I0906 00:05:24.052025 2115 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:05:24.052253 kubelet[2115]: I0906 00:05:24.052243 2115 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:05:24.052253 kubelet[2115]: I0906 00:05:24.052253 2115 container_manager_linux.go:303] "Creating device plugin manager"
Sep 6 00:05:24.052454 kubelet[2115]: I0906 00:05:24.052428 2115 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:05:24.054966 kubelet[2115]: I0906 00:05:24.054948 2115 kubelet.go:480] "Attempting to sync node with API server"
Sep 6 00:05:24.055049 kubelet[2115]: I0906 00:05:24.054973 2115 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:05:24.055049 kubelet[2115]: I0906 00:05:24.054999 2115 kubelet.go:386] "Adding apiserver pod source"
Sep 6 00:05:24.055049 kubelet[2115]: I0906 00:05:24.055012 2115 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:05:24.058654 kubelet[2115]: I0906 00:05:24.057967 2115 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 6 00:05:24.058654 kubelet[2115]: E0906 00:05:24.058556 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 6 00:05:24.058654 kubelet[2115]: E0906 00:05:24.058555 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 6 00:05:24.058778 kubelet[2115]: I0906 00:05:24.058708 2115 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 6 00:05:24.058840 kubelet[2115]: W0906 00:05:24.058820 2115 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 6 00:05:24.061965 kubelet[2115]: I0906 00:05:24.061925 2115 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 6 00:05:24.062121 kubelet[2115]: I0906 00:05:24.061979 2115 server.go:1289] "Started kubelet"
Sep 6 00:05:24.062121 kubelet[2115]: I0906 00:05:24.062035 2115 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:05:24.062416 kubelet[2115]: I0906 00:05:24.062387 2115 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:05:24.062757 kubelet[2115]: I0906 00:05:24.062561 2115 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:05:24.067239 kubelet[2115]: I0906 00:05:24.067218 2115 server.go:317] "Adding debug handlers to kubelet server"
Sep 6 00:05:24.068424 kubelet[2115]: I0906 00:05:24.068398 2115 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:05:24.069015 kubelet[2115]: I0906 00:05:24.068839 2115 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:05:24.069594 kubelet[2115]: E0906 00:05:24.069572 2115 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:05:24.069656 kubelet[2115]: E0906 00:05:24.069610 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:05:24.069656 kubelet[2115]: E0906 00:05:24.068304 2115 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.93:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.93:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186288b6f8780c68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:05:24.061957224 +0000 UTC m=+0.822784212,LastTimestamp:2025-09-06 00:05:24.061957224 +0000 UTC m=+0.822784212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 6 00:05:24.069656 kubelet[2115]: I0906 00:05:24.069637 2115 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 6 00:05:24.069756 kubelet[2115]: I0906 00:05:24.069644 2115 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 6 00:05:24.069756 kubelet[2115]: I0906 00:05:24.069747 2115 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:05:24.070198 kubelet[2115]: E0906 00:05:24.070168 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 6 00:05:24.070390 kubelet[2115]: I0906 00:05:24.070365 2115 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:05:24.070558 kubelet[2115]: E0906 00:05:24.070455 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="200ms"
Sep 6 00:05:24.071715 kubelet[2115]: I0906 00:05:24.071695 2115 factory.go:223] Registration of the containerd container factory successfully
Sep 6 00:05:24.071715 kubelet[2115]: I0906 00:05:24.071712 2115 factory.go:223] Registration of the systemd container factory successfully
Sep 6 00:05:24.085057 kubelet[2115]: I0906 00:05:24.085035 2115 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 6 00:05:24.085371 kubelet[2115]: I0906 00:05:24.085170 2115 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 6 00:05:24.085371 kubelet[2115]: I0906 00:05:24.085195 2115 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:05:24.086625 kubelet[2115]: I0906 00:05:24.086591 2115 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:05:24.087580 kubelet[2115]: I0906 00:05:24.087502 2115 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:05:24.087580 kubelet[2115]: I0906 00:05:24.087525 2115 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 6 00:05:24.087580 kubelet[2115]: I0906 00:05:24.087542 2115 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 6 00:05:24.087580 kubelet[2115]: I0906 00:05:24.087548 2115 kubelet.go:2436] "Starting kubelet main sync loop" Sep 6 00:05:24.087795 kubelet[2115]: E0906 00:05:24.087581 2115 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 6 00:05:24.159239 kubelet[2115]: I0906 00:05:24.159190 2115 policy_none.go:49] "None policy: Start" Sep 6 00:05:24.159239 kubelet[2115]: I0906 00:05:24.159227 2115 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 6 00:05:24.159239 kubelet[2115]: I0906 00:05:24.159240 2115 state_mem.go:35] "Initializing new in-memory state store" Sep 6 00:05:24.159641 kubelet[2115]: E0906 00:05:24.159619 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 00:05:24.164989 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 6 00:05:24.170663 kubelet[2115]: E0906 00:05:24.170604 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:05:24.177235 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 6 00:05:24.180188 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Sep 6 00:05:24.188521 kubelet[2115]: E0906 00:05:24.188492 2115 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 6 00:05:24.189793 kubelet[2115]: E0906 00:05:24.189768 2115 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 6 00:05:24.189993 kubelet[2115]: I0906 00:05:24.189970 2115 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 6 00:05:24.190025 kubelet[2115]: I0906 00:05:24.189988 2115 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 6 00:05:24.190878 kubelet[2115]: I0906 00:05:24.190692 2115 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 6 00:05:24.191380 kubelet[2115]: E0906 00:05:24.191304 2115 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 6 00:05:24.191443 kubelet[2115]: E0906 00:05:24.191405 2115 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 6 00:05:24.271724 kubelet[2115]: E0906 00:05:24.271680 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="400ms" Sep 6 00:05:24.291978 kubelet[2115]: I0906 00:05:24.291874 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:05:24.293606 kubelet[2115]: E0906 00:05:24.293573 2115 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 6 00:05:24.403964 systemd[1]: Created slice 
kubepods-burstable-pod9f52dddc506be6ff9a45c274b25ce3df.slice - libcontainer container kubepods-burstable-pod9f52dddc506be6ff9a45c274b25ce3df.slice. Sep 6 00:05:24.422013 kubelet[2115]: E0906 00:05:24.421907 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:24.428504 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 6 00:05:24.440548 kubelet[2115]: E0906 00:05:24.440503 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:24.445134 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. Sep 6 00:05:24.447295 kubelet[2115]: E0906 00:05:24.447265 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:24.472663 kubelet[2115]: I0906 00:05:24.472630 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:05:24.472777 kubelet[2115]: I0906 00:05:24.472721 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 6 
00:05:24.472777 kubelet[2115]: I0906 00:05:24.472747 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f52dddc506be6ff9a45c274b25ce3df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f52dddc506be6ff9a45c274b25ce3df\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:05:24.472777 kubelet[2115]: I0906 00:05:24.472764 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f52dddc506be6ff9a45c274b25ce3df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9f52dddc506be6ff9a45c274b25ce3df\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:05:24.472842 kubelet[2115]: I0906 00:05:24.472778 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:05:24.472842 kubelet[2115]: I0906 00:05:24.472793 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:05:24.472842 kubelet[2115]: I0906 00:05:24.472805 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 
00:05:24.472842 kubelet[2115]: I0906 00:05:24.472818 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 6 00:05:24.472842 kubelet[2115]: I0906 00:05:24.472832 2115 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f52dddc506be6ff9a45c274b25ce3df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f52dddc506be6ff9a45c274b25ce3df\") " pod="kube-system/kube-apiserver-localhost" Sep 6 00:05:24.495977 kubelet[2115]: I0906 00:05:24.495468 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:05:24.495977 kubelet[2115]: E0906 00:05:24.495792 2115 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 6 00:05:24.672504 kubelet[2115]: E0906 00:05:24.672281 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="800ms" Sep 6 00:05:24.724218 kubelet[2115]: E0906 00:05:24.724183 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:24.724859 containerd[1433]: time="2025-09-06T00:05:24.724812653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9f52dddc506be6ff9a45c274b25ce3df,Namespace:kube-system,Attempt:0,}" Sep 6 00:05:24.741761 kubelet[2115]: E0906 
00:05:24.741708 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:24.742494 containerd[1433]: time="2025-09-06T00:05:24.742188168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 6 00:05:24.748438 kubelet[2115]: E0906 00:05:24.748167 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:24.748661 containerd[1433]: time="2025-09-06T00:05:24.748553553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 6 00:05:24.897647 kubelet[2115]: I0906 00:05:24.897614 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:05:24.897994 kubelet[2115]: E0906 00:05:24.897962 2115 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.93:6443/api/v1/nodes\": dial tcp 10.0.0.93:6443: connect: connection refused" node="localhost" Sep 6 00:05:24.903553 kubelet[2115]: E0906 00:05:24.903513 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.93:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 6 00:05:24.962299 kubelet[2115]: E0906 00:05:24.962193 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.93:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 6 00:05:25.203834 kubelet[2115]: E0906 00:05:25.203794 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.93:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 6 00:05:25.204917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1444747288.mount: Deactivated successfully. Sep 6 00:05:25.212086 containerd[1433]: time="2025-09-06T00:05:25.211915207Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:05:25.214927 containerd[1433]: time="2025-09-06T00:05:25.214803762Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:05:25.215997 containerd[1433]: time="2025-09-06T00:05:25.215961477Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:05:25.217440 containerd[1433]: time="2025-09-06T00:05:25.217403438Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:05:25.218207 containerd[1433]: time="2025-09-06T00:05:25.218170269Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 6 00:05:25.219521 containerd[1433]: time="2025-09-06T00:05:25.219479872Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:05:25.222653 containerd[1433]: time="2025-09-06T00:05:25.222606864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 6 00:05:25.223170 containerd[1433]: time="2025-09-06T00:05:25.223143406Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 6 00:05:25.226023 containerd[1433]: time="2025-09-06T00:05:25.225972013Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 483.719715ms" Sep 6 00:05:25.227579 containerd[1433]: time="2025-09-06T00:05:25.227371998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 478.75787ms" Sep 6 00:05:25.227864 containerd[1433]: time="2025-09-06T00:05:25.227836530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 502.945571ms" Sep 6 00:05:25.328157 containerd[1433]: time="2025-09-06T00:05:25.327403349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:25.328157 containerd[1433]: time="2025-09-06T00:05:25.327476318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:25.328157 containerd[1433]: time="2025-09-06T00:05:25.327493172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:25.328885 containerd[1433]: time="2025-09-06T00:05:25.328823863Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:25.328999 containerd[1433]: time="2025-09-06T00:05:25.328893677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:25.329290 containerd[1433]: time="2025-09-06T00:05:25.329251092Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:25.329326 containerd[1433]: time="2025-09-06T00:05:25.329304171Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:25.329434 containerd[1433]: time="2025-09-06T00:05:25.329404738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:25.331454 containerd[1433]: time="2025-09-06T00:05:25.330892789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:25.331454 containerd[1433]: time="2025-09-06T00:05:25.331353286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:25.331454 containerd[1433]: time="2025-09-06T00:05:25.331380485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:25.332527 containerd[1433]: time="2025-09-06T00:05:25.332433799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:25.350098 systemd[1]: Started cri-containerd-bfb1872c48a6eb5823dcb3023aad2e72239d9667b0040c8ed1b094efc58b9b3b.scope - libcontainer container bfb1872c48a6eb5823dcb3023aad2e72239d9667b0040c8ed1b094efc58b9b3b. Sep 6 00:05:25.351168 systemd[1]: Started cri-containerd-cfa57ad8a483990dc087a532f792db4a422a6157a30a636e983006c458a1a9f5.scope - libcontainer container cfa57ad8a483990dc087a532f792db4a422a6157a30a636e983006c458a1a9f5. Sep 6 00:05:25.355279 systemd[1]: Started cri-containerd-7894a78b70a9ba9c97a834c60a05116e98b037b75a62e1965fea2d021deab1df.scope - libcontainer container 7894a78b70a9ba9c97a834c60a05116e98b037b75a62e1965fea2d021deab1df. 
Sep 6 00:05:25.380469 containerd[1433]: time="2025-09-06T00:05:25.380434567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9f52dddc506be6ff9a45c274b25ce3df,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfb1872c48a6eb5823dcb3023aad2e72239d9667b0040c8ed1b094efc58b9b3b\"" Sep 6 00:05:25.382537 kubelet[2115]: E0906 00:05:25.382507 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:25.388873 containerd[1433]: time="2025-09-06T00:05:25.388833480Z" level=info msg="CreateContainer within sandbox \"bfb1872c48a6eb5823dcb3023aad2e72239d9667b0040c8ed1b094efc58b9b3b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 6 00:05:25.396425 containerd[1433]: time="2025-09-06T00:05:25.396378775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfa57ad8a483990dc087a532f792db4a422a6157a30a636e983006c458a1a9f5\"" Sep 6 00:05:25.397068 kubelet[2115]: E0906 00:05:25.397022 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:25.399060 containerd[1433]: time="2025-09-06T00:05:25.399033966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"7894a78b70a9ba9c97a834c60a05116e98b037b75a62e1965fea2d021deab1df\"" Sep 6 00:05:25.399553 kubelet[2115]: E0906 00:05:25.399536 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:25.402729 containerd[1433]: 
time="2025-09-06T00:05:25.402690551Z" level=info msg="CreateContainer within sandbox \"cfa57ad8a483990dc087a532f792db4a422a6157a30a636e983006c458a1a9f5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 6 00:05:25.404430 containerd[1433]: time="2025-09-06T00:05:25.404401422Z" level=info msg="CreateContainer within sandbox \"7894a78b70a9ba9c97a834c60a05116e98b037b75a62e1965fea2d021deab1df\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 6 00:05:25.408319 containerd[1433]: time="2025-09-06T00:05:25.408287896Z" level=info msg="CreateContainer within sandbox \"bfb1872c48a6eb5823dcb3023aad2e72239d9667b0040c8ed1b094efc58b9b3b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"2bc40094975aa3af8b4e90f34c55769105cb4bdd8d68c6555115810a122c1fe0\"" Sep 6 00:05:25.408997 containerd[1433]: time="2025-09-06T00:05:25.408966101Z" level=info msg="StartContainer for \"2bc40094975aa3af8b4e90f34c55769105cb4bdd8d68c6555115810a122c1fe0\"" Sep 6 00:05:25.421212 containerd[1433]: time="2025-09-06T00:05:25.421175245Z" level=info msg="CreateContainer within sandbox \"7894a78b70a9ba9c97a834c60a05116e98b037b75a62e1965fea2d021deab1df\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"16bf0a4d615be033a072124f85660ca865310c653df127647b86a7401572b725\"" Sep 6 00:05:25.422728 containerd[1433]: time="2025-09-06T00:05:25.421606387Z" level=info msg="StartContainer for \"16bf0a4d615be033a072124f85660ca865310c653df127647b86a7401572b725\"" Sep 6 00:05:25.424203 containerd[1433]: time="2025-09-06T00:05:25.424167323Z" level=info msg="CreateContainer within sandbox \"cfa57ad8a483990dc087a532f792db4a422a6157a30a636e983006c458a1a9f5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa3e6d9d9a446bdbd1e231a92c3bde564ba0fd510599458eb8c7df31864bfb8a\"" Sep 6 00:05:25.424715 containerd[1433]: time="2025-09-06T00:05:25.424679701Z" level=info msg="StartContainer for 
\"fa3e6d9d9a446bdbd1e231a92c3bde564ba0fd510599458eb8c7df31864bfb8a\"" Sep 6 00:05:25.435090 systemd[1]: Started cri-containerd-2bc40094975aa3af8b4e90f34c55769105cb4bdd8d68c6555115810a122c1fe0.scope - libcontainer container 2bc40094975aa3af8b4e90f34c55769105cb4bdd8d68c6555115810a122c1fe0. Sep 6 00:05:25.457090 systemd[1]: Started cri-containerd-16bf0a4d615be033a072124f85660ca865310c653df127647b86a7401572b725.scope - libcontainer container 16bf0a4d615be033a072124f85660ca865310c653df127647b86a7401572b725. Sep 6 00:05:25.458756 kubelet[2115]: E0906 00:05:25.458509 2115 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.93:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.93:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 6 00:05:25.458643 systemd[1]: Started cri-containerd-fa3e6d9d9a446bdbd1e231a92c3bde564ba0fd510599458eb8c7df31864bfb8a.scope - libcontainer container fa3e6d9d9a446bdbd1e231a92c3bde564ba0fd510599458eb8c7df31864bfb8a. 
Sep 6 00:05:25.472796 kubelet[2115]: E0906 00:05:25.472758 2115 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.93:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.93:6443: connect: connection refused" interval="1.6s" Sep 6 00:05:25.472982 containerd[1433]: time="2025-09-06T00:05:25.472896859Z" level=info msg="StartContainer for \"2bc40094975aa3af8b4e90f34c55769105cb4bdd8d68c6555115810a122c1fe0\" returns successfully" Sep 6 00:05:25.493208 containerd[1433]: time="2025-09-06T00:05:25.493164994Z" level=info msg="StartContainer for \"fa3e6d9d9a446bdbd1e231a92c3bde564ba0fd510599458eb8c7df31864bfb8a\" returns successfully" Sep 6 00:05:25.503196 containerd[1433]: time="2025-09-06T00:05:25.503126605Z" level=info msg="StartContainer for \"16bf0a4d615be033a072124f85660ca865310c653df127647b86a7401572b725\" returns successfully" Sep 6 00:05:25.699355 kubelet[2115]: I0906 00:05:25.699297 2115 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 6 00:05:26.097423 kubelet[2115]: E0906 00:05:26.097390 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:26.097533 kubelet[2115]: E0906 00:05:26.097515 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:26.099041 kubelet[2115]: E0906 00:05:26.099003 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:26.099116 kubelet[2115]: E0906 00:05:26.099102 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:26.100577 
kubelet[2115]: E0906 00:05:26.100557 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:26.100668 kubelet[2115]: E0906 00:05:26.100653 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:27.076266 kubelet[2115]: E0906 00:05:27.076216 2115 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 6 00:05:27.094991 kubelet[2115]: I0906 00:05:27.094963 2115 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 6 00:05:27.094991 kubelet[2115]: E0906 00:05:27.094995 2115 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 6 00:05:27.105598 kubelet[2115]: E0906 00:05:27.105242 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:05:27.106165 kubelet[2115]: E0906 00:05:27.106041 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:27.106277 kubelet[2115]: E0906 00:05:27.106261 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:27.106714 kubelet[2115]: E0906 00:05:27.106698 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:27.106920 kubelet[2115]: E0906 00:05:27.106904 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:27.117192 kubelet[2115]: E0906 00:05:27.117107 2115 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.186288b6f8780c68 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-06 00:05:24.061957224 +0000 UTC m=+0.822784212,LastTimestamp:2025-09-06 00:05:24.061957224 +0000 UTC m=+0.822784212,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 6 00:05:27.205960 kubelet[2115]: E0906 00:05:27.205394 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:05:27.260833 kubelet[2115]: E0906 00:05:27.260804 2115 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 6 00:05:27.261064 kubelet[2115]: E0906 00:05:27.260925 2115 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:27.305886 kubelet[2115]: E0906 00:05:27.305854 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:05:27.407093 kubelet[2115]: E0906 00:05:27.406931 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:05:27.507698 kubelet[2115]: E0906 00:05:27.507663 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:05:27.608653 
kubelet[2115]: E0906 00:05:27.608605 2115 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 6 00:05:27.770137 kubelet[2115]: I0906 00:05:27.770097 2115 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 6 00:05:27.775427 kubelet[2115]: E0906 00:05:27.775393 2115 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 6 00:05:27.775427 kubelet[2115]: I0906 00:05:27.775420 2115 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:05:27.777760 kubelet[2115]: E0906 00:05:27.777716 2115 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 6 00:05:27.777760 kubelet[2115]: I0906 00:05:27.777753 2115 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 6 00:05:27.779311 kubelet[2115]: E0906 00:05:27.779282 2115 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 6 00:05:28.058601 kubelet[2115]: I0906 00:05:28.057459 2115 apiserver.go:52] "Watching apiserver" Sep 6 00:05:28.070102 kubelet[2115]: I0906 00:05:28.070060 2115 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 6 00:05:29.180023 systemd[1]: Reloading requested from client PID 2412 ('systemctl') (unit session-7.scope)... Sep 6 00:05:29.180042 systemd[1]: Reloading... Sep 6 00:05:29.272962 zram_generator::config[2454]: No configuration found. 
Sep 6 00:05:29.370115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 6 00:05:29.437612 systemd[1]: Reloading finished in 257 ms. Sep 6 00:05:29.490263 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:05:29.503114 systemd[1]: kubelet.service: Deactivated successfully. Sep 6 00:05:29.504029 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:05:29.504114 systemd[1]: kubelet.service: Consumed 1.168s CPU time, 132.0M memory peak, 0B memory swap peak. Sep 6 00:05:29.513219 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 6 00:05:29.621830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 6 00:05:29.627420 (kubelet)[2493]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 6 00:05:29.676312 kubelet[2493]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 6 00:05:29.676312 kubelet[2493]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 6 00:05:29.676312 kubelet[2493]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 6 00:05:29.676656 kubelet[2493]: I0906 00:05:29.676347 2493 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 6 00:05:29.683461 kubelet[2493]: I0906 00:05:29.683330 2493 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 6 00:05:29.683461 kubelet[2493]: I0906 00:05:29.683357 2493 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 6 00:05:29.683580 kubelet[2493]: I0906 00:05:29.683544 2493 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 6 00:05:29.687315 kubelet[2493]: I0906 00:05:29.687212 2493 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 6 00:05:29.691871 kubelet[2493]: I0906 00:05:29.691780 2493 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 6 00:05:29.694859 kubelet[2493]: E0906 00:05:29.694707 2493 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 6 00:05:29.694859 kubelet[2493]: I0906 00:05:29.694772 2493 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 6 00:05:29.697225 kubelet[2493]: I0906 00:05:29.697207 2493 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 6 00:05:29.697414 kubelet[2493]: I0906 00:05:29.697390 2493 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 6 00:05:29.697584 kubelet[2493]: I0906 00:05:29.697416 2493 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 6 00:05:29.697654 kubelet[2493]: I0906 00:05:29.697591 2493 topology_manager.go:138] "Creating topology manager with none policy"
Sep 6 00:05:29.697654 kubelet[2493]: I0906 00:05:29.697600 2493 container_manager_linux.go:303] "Creating device plugin manager"
Sep 6 00:05:29.697654 kubelet[2493]: I0906 00:05:29.697640 2493 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:05:29.697803 kubelet[2493]: I0906 00:05:29.697790 2493 kubelet.go:480] "Attempting to sync node with API server"
Sep 6 00:05:29.697834 kubelet[2493]: I0906 00:05:29.697808 2493 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 6 00:05:29.697834 kubelet[2493]: I0906 00:05:29.697830 2493 kubelet.go:386] "Adding apiserver pod source"
Sep 6 00:05:29.697875 kubelet[2493]: I0906 00:05:29.697842 2493 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 6 00:05:29.699748 kubelet[2493]: I0906 00:05:29.698776 2493 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 6 00:05:29.699748 kubelet[2493]: I0906 00:05:29.699308 2493 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 6 00:05:29.711977 kubelet[2493]: I0906 00:05:29.710223 2493 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 6 00:05:29.711977 kubelet[2493]: I0906 00:05:29.710265 2493 server.go:1289] "Started kubelet"
Sep 6 00:05:29.714554 kubelet[2493]: I0906 00:05:29.714528 2493 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 6 00:05:29.715069 kubelet[2493]: I0906 00:05:29.715033 2493 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 6 00:05:29.718096 kubelet[2493]: I0906 00:05:29.718024 2493 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 6 00:05:29.718411 kubelet[2493]: I0906 00:05:29.718394 2493 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 6 00:05:29.718524 kubelet[2493]: I0906 00:05:29.718408 2493 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 6 00:05:29.719332 kubelet[2493]: I0906 00:05:29.719313 2493 server.go:317] "Adding debug handlers to kubelet server"
Sep 6 00:05:29.719455 kubelet[2493]: I0906 00:05:29.719430 2493 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 6 00:05:29.719634 kubelet[2493]: E0906 00:05:29.719613 2493 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 6 00:05:29.720083 kubelet[2493]: I0906 00:05:29.720048 2493 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 6 00:05:29.720199 kubelet[2493]: I0906 00:05:29.720169 2493 reconciler.go:26] "Reconciler: start to sync state"
Sep 6 00:05:29.721466 kubelet[2493]: I0906 00:05:29.721444 2493 factory.go:223] Registration of the systemd container factory successfully
Sep 6 00:05:29.723703 kubelet[2493]: I0906 00:05:29.723669 2493 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 6 00:05:29.726284 kubelet[2493]: I0906 00:05:29.726255 2493 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 6 00:05:29.727118 kubelet[2493]: E0906 00:05:29.726704 2493 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 6 00:05:29.728274 kubelet[2493]: I0906 00:05:29.728237 2493 factory.go:223] Registration of the containerd container factory successfully
Sep 6 00:05:29.732723 kubelet[2493]: I0906 00:05:29.732699 2493 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 6 00:05:29.732825 kubelet[2493]: I0906 00:05:29.732814 2493 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 6 00:05:29.732889 kubelet[2493]: I0906 00:05:29.732878 2493 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 6 00:05:29.732944 kubelet[2493]: I0906 00:05:29.732926 2493 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 6 00:05:29.733040 kubelet[2493]: E0906 00:05:29.733022 2493 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 6 00:05:29.756025 kubelet[2493]: I0906 00:05:29.755999 2493 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 6 00:05:29.756025 kubelet[2493]: I0906 00:05:29.756015 2493 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 6 00:05:29.756135 kubelet[2493]: I0906 00:05:29.756037 2493 state_mem.go:36] "Initialized new in-memory state store"
Sep 6 00:05:29.756164 kubelet[2493]: I0906 00:05:29.756157 2493 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 6 00:05:29.756188 kubelet[2493]: I0906 00:05:29.756168 2493 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 6 00:05:29.756188 kubelet[2493]: I0906 00:05:29.756185 2493 policy_none.go:49] "None policy: Start"
Sep 6 00:05:29.756230 kubelet[2493]: I0906 00:05:29.756194 2493 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 6 00:05:29.756230 kubelet[2493]: I0906 00:05:29.756203 2493 state_mem.go:35] "Initializing new in-memory state store"
Sep 6 00:05:29.756324 kubelet[2493]: I0906 00:05:29.756311 2493 state_mem.go:75] "Updated machine memory state"
Sep 6 00:05:29.761784 kubelet[2493]: E0906 00:05:29.761637 2493 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 6 00:05:29.761863 kubelet[2493]: I0906 00:05:29.761823 2493 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 6 00:05:29.761863 kubelet[2493]: I0906 00:05:29.761835 2493 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 6 00:05:29.762046 kubelet[2493]: I0906 00:05:29.762026 2493 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 6 00:05:29.763043 kubelet[2493]: E0906 00:05:29.763018 2493 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 6 00:05:29.834211 kubelet[2493]: I0906 00:05:29.833959 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:29.834211 kubelet[2493]: I0906 00:05:29.834094 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 6 00:05:29.834211 kubelet[2493]: I0906 00:05:29.834148 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 6 00:05:29.866257 kubelet[2493]: I0906 00:05:29.866230 2493 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 6 00:05:29.875823 kubelet[2493]: I0906 00:05:29.875792 2493 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Sep 6 00:05:29.875929 kubelet[2493]: I0906 00:05:29.875876 2493 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 6 00:05:30.021734 kubelet[2493]: I0906 00:05:30.021679 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9f52dddc506be6ff9a45c274b25ce3df-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f52dddc506be6ff9a45c274b25ce3df\") " pod="kube-system/kube-apiserver-localhost"
Sep 6 00:05:30.021734 kubelet[2493]: I0906 00:05:30.021724 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9f52dddc506be6ff9a45c274b25ce3df-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9f52dddc506be6ff9a45c274b25ce3df\") " pod="kube-system/kube-apiserver-localhost"
Sep 6 00:05:30.021909 kubelet[2493]: I0906 00:05:30.021746 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9f52dddc506be6ff9a45c274b25ce3df-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9f52dddc506be6ff9a45c274b25ce3df\") " pod="kube-system/kube-apiserver-localhost"
Sep 6 00:05:30.021909 kubelet[2493]: I0906 00:05:30.021802 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:30.021909 kubelet[2493]: I0906 00:05:30.021837 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:30.021909 kubelet[2493]: I0906 00:05:30.021880 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:30.021909 kubelet[2493]: I0906 00:05:30.021909 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 6 00:05:30.022094 kubelet[2493]: I0906 00:05:30.021954 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:30.022094 kubelet[2493]: I0906 00:05:30.021973 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:30.141331 kubelet[2493]: E0906 00:05:30.141252 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:30.142006 kubelet[2493]: E0906 00:05:30.141970 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:30.144807 kubelet[2493]: E0906 00:05:30.144596 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:30.183666 sudo[2534]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 6 00:05:30.183952 sudo[2534]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 6 00:05:30.631721 sudo[2534]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:30.698466 kubelet[2493]: I0906 00:05:30.698371 2493 apiserver.go:52] "Watching apiserver"
Sep 6 00:05:30.720263 kubelet[2493]: I0906 00:05:30.720222 2493 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 6 00:05:30.743692 kubelet[2493]: I0906 00:05:30.743667 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 6 00:05:30.744011 kubelet[2493]: E0906 00:05:30.743995 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:30.745662 kubelet[2493]: I0906 00:05:30.745450 2493 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:30.752377 kubelet[2493]: E0906 00:05:30.752350 2493 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 6 00:05:30.752489 kubelet[2493]: E0906 00:05:30.752466 2493 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Sep 6 00:05:30.752621 kubelet[2493]: E0906 00:05:30.752604 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:30.752700 kubelet[2493]: E0906 00:05:30.752654 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:30.768171 kubelet[2493]: I0906 00:05:30.767993 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.767981406 podStartE2EDuration="1.767981406s" podCreationTimestamp="2025-09-06 00:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:30.766682929 +0000 UTC m=+1.129096563" watchObservedRunningTime="2025-09-06 00:05:30.767981406 +0000 UTC m=+1.130395039"
Sep 6 00:05:30.780366 kubelet[2493]: I0906 00:05:30.780270 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.780259099 podStartE2EDuration="1.780259099s" podCreationTimestamp="2025-09-06 00:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:30.774211624 +0000 UTC m=+1.136625258" watchObservedRunningTime="2025-09-06 00:05:30.780259099 +0000 UTC m=+1.142672733"
Sep 6 00:05:30.788259 kubelet[2493]: I0906 00:05:30.788213 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.7882021670000001 podStartE2EDuration="1.788202167s" podCreationTimestamp="2025-09-06 00:05:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:30.78051339 +0000 UTC m=+1.142927024" watchObservedRunningTime="2025-09-06 00:05:30.788202167 +0000 UTC m=+1.150615761"
Sep 6 00:05:31.747291 kubelet[2493]: E0906 00:05:31.747252 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:31.751137 kubelet[2493]: E0906 00:05:31.750837 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:31.751137 kubelet[2493]: E0906 00:05:31.751071 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:32.724652 sudo[1616]: pam_unix(sudo:session): session closed for user root
Sep 6 00:05:32.727031 sshd[1613]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:32.731016 systemd[1]: sshd@6-10.0.0.93:22-10.0.0.1:52960.service: Deactivated successfully.
Sep 6 00:05:32.732387 systemd[1]: session-7.scope: Deactivated successfully.
Sep 6 00:05:32.733971 systemd[1]: session-7.scope: Consumed 8.096s CPU time, 152.2M memory peak, 0B memory swap peak.
Sep 6 00:05:32.734631 systemd-logind[1422]: Session 7 logged out. Waiting for processes to exit.
Sep 6 00:05:32.735607 systemd-logind[1422]: Removed session 7.
Sep 6 00:05:32.747842 kubelet[2493]: E0906 00:05:32.747758 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:32.749193 kubelet[2493]: E0906 00:05:32.749163 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:33.776213 kubelet[2493]: E0906 00:05:33.776171 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:34.106101 kubelet[2493]: E0906 00:05:34.105867 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:35.340241 kubelet[2493]: I0906 00:05:35.340175 2493 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 6 00:05:35.340809 containerd[1433]: time="2025-09-06T00:05:35.340726608Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 6 00:05:35.341063 kubelet[2493]: I0906 00:05:35.340917 2493 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 6 00:05:36.307828 systemd[1]: Created slice kubepods-besteffort-pod8923165b_17fc_45a2_91aa_c184dad528fb.slice - libcontainer container kubepods-besteffort-pod8923165b_17fc_45a2_91aa_c184dad528fb.slice.
Sep 6 00:05:36.331124 systemd[1]: Created slice kubepods-burstable-pod82e02273_117b_4b61_8c77_b8cc92f40c43.slice - libcontainer container kubepods-burstable-pod82e02273_117b_4b61_8c77_b8cc92f40c43.slice.
Sep 6 00:05:36.369422 kubelet[2493]: I0906 00:05:36.369287 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-bpf-maps\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.369422 kubelet[2493]: I0906 00:05:36.369339 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-kernel\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.369422 kubelet[2493]: I0906 00:05:36.369376 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8923165b-17fc-45a2-91aa-c184dad528fb-lib-modules\") pod \"kube-proxy-g85b8\" (UID: \"8923165b-17fc-45a2-91aa-c184dad528fb\") " pod="kube-system/kube-proxy-g85b8"
Sep 6 00:05:36.369983 kubelet[2493]: I0906 00:05:36.369397 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8zggl\" (UniqueName: \"kubernetes.io/projected/8923165b-17fc-45a2-91aa-c184dad528fb-kube-api-access-8zggl\") pod \"kube-proxy-g85b8\" (UID: \"8923165b-17fc-45a2-91aa-c184dad528fb\") " pod="kube-system/kube-proxy-g85b8"
Sep 6 00:05:36.369983 kubelet[2493]: I0906 00:05:36.369530 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-xtables-lock\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.369983 kubelet[2493]: I0906 00:05:36.369545 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-hubble-tls\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370679 kubelet[2493]: I0906 00:05:36.369561 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8923165b-17fc-45a2-91aa-c184dad528fb-kube-proxy\") pod \"kube-proxy-g85b8\" (UID: \"8923165b-17fc-45a2-91aa-c184dad528fb\") " pod="kube-system/kube-proxy-g85b8"
Sep 6 00:05:36.370679 kubelet[2493]: I0906 00:05:36.370225 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8923165b-17fc-45a2-91aa-c184dad528fb-xtables-lock\") pod \"kube-proxy-g85b8\" (UID: \"8923165b-17fc-45a2-91aa-c184dad528fb\") " pod="kube-system/kube-proxy-g85b8"
Sep 6 00:05:36.370679 kubelet[2493]: I0906 00:05:36.370256 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-run\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370679 kubelet[2493]: I0906 00:05:36.370272 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-hostproc\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370679 kubelet[2493]: I0906 00:05:36.370308 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-cgroup\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370679 kubelet[2493]: I0906 00:05:36.370324 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cni-path\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370920 kubelet[2493]: I0906 00:05:36.370337 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-etc-cni-netd\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370920 kubelet[2493]: I0906 00:05:36.370380 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-lib-modules\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370920 kubelet[2493]: I0906 00:05:36.370399 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82e02273-117b-4b61-8c77-b8cc92f40c43-clustermesh-secrets\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370920 kubelet[2493]: I0906 00:05:36.370420 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-config-path\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.370920 kubelet[2493]: I0906 00:05:36.370434 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-net\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.371123 kubelet[2493]: I0906 00:05:36.370465 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85lfb\" (UniqueName: \"kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-kube-api-access-85lfb\") pod \"cilium-98fpz\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") " pod="kube-system/cilium-98fpz"
Sep 6 00:05:36.559875 systemd[1]: Created slice kubepods-besteffort-pod6bd01021_bacd_4264_afa2_0e297c8a13db.slice - libcontainer container kubepods-besteffort-pod6bd01021_bacd_4264_afa2_0e297c8a13db.slice.
Sep 6 00:05:36.571634 kubelet[2493]: I0906 00:05:36.571592 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bd01021-bacd-4264-afa2-0e297c8a13db-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-f5tnw\" (UID: \"6bd01021-bacd-4264-afa2-0e297c8a13db\") " pod="kube-system/cilium-operator-6c4d7847fc-f5tnw"
Sep 6 00:05:36.571634 kubelet[2493]: I0906 00:05:36.571639 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr4hc\" (UniqueName: \"kubernetes.io/projected/6bd01021-bacd-4264-afa2-0e297c8a13db-kube-api-access-jr4hc\") pod \"cilium-operator-6c4d7847fc-f5tnw\" (UID: \"6bd01021-bacd-4264-afa2-0e297c8a13db\") " pod="kube-system/cilium-operator-6c4d7847fc-f5tnw"
Sep 6 00:05:36.629049 kubelet[2493]: E0906 00:05:36.629006 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:36.629623 containerd[1433]: time="2025-09-06T00:05:36.629583404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g85b8,Uid:8923165b-17fc-45a2-91aa-c184dad528fb,Namespace:kube-system,Attempt:0,}"
Sep 6 00:05:36.634762 kubelet[2493]: E0906 00:05:36.634482 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:36.634907 containerd[1433]: time="2025-09-06T00:05:36.634855346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-98fpz,Uid:82e02273-117b-4b61-8c77-b8cc92f40c43,Namespace:kube-system,Attempt:0,}"
Sep 6 00:05:36.659601 containerd[1433]: time="2025-09-06T00:05:36.659401767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:05:36.660185 containerd[1433]: time="2025-09-06T00:05:36.659969040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:05:36.660185 containerd[1433]: time="2025-09-06T00:05:36.660005770Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:36.660185 containerd[1433]: time="2025-09-06T00:05:36.660104036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:36.668786 containerd[1433]: time="2025-09-06T00:05:36.668524428Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:05:36.668889 containerd[1433]: time="2025-09-06T00:05:36.668805543Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:05:36.669735 containerd[1433]: time="2025-09-06T00:05:36.669595677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:36.670010 containerd[1433]: time="2025-09-06T00:05:36.669979860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:36.684233 systemd[1]: Started cri-containerd-539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3.scope - libcontainer container 539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3.
Sep 6 00:05:36.688910 systemd[1]: Started cri-containerd-ced93641658ce4b43ea93e9cc6cc5b3b424dd2029ad938104764648008327966.scope - libcontainer container ced93641658ce4b43ea93e9cc6cc5b3b424dd2029ad938104764648008327966.
Sep 6 00:05:36.706519 containerd[1433]: time="2025-09-06T00:05:36.706483106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-98fpz,Uid:82e02273-117b-4b61-8c77-b8cc92f40c43,Namespace:kube-system,Attempt:0,} returns sandbox id \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\"" Sep 6 00:05:36.707158 kubelet[2493]: E0906 00:05:36.707129 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:36.710695 containerd[1433]: time="2025-09-06T00:05:36.710526276Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 6 00:05:36.711981 containerd[1433]: time="2025-09-06T00:05:36.711701714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g85b8,Uid:8923165b-17fc-45a2-91aa-c184dad528fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ced93641658ce4b43ea93e9cc6cc5b3b424dd2029ad938104764648008327966\"" Sep 6 00:05:36.712288 kubelet[2493]: E0906 00:05:36.712259 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:36.717134 containerd[1433]: time="2025-09-06T00:05:36.717049596Z" level=info msg="CreateContainer within sandbox \"ced93641658ce4b43ea93e9cc6cc5b3b424dd2029ad938104764648008327966\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 6 00:05:36.729583 containerd[1433]: time="2025-09-06T00:05:36.729535164Z" level=info msg="CreateContainer within sandbox \"ced93641658ce4b43ea93e9cc6cc5b3b424dd2029ad938104764648008327966\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"367f4e20ec73bb53158175446e4e0d9981dabb2ae72546e68cbb8a7162e773b6\"" Sep 6 00:05:36.730191 containerd[1433]: time="2025-09-06T00:05:36.730155691Z" level=info 
msg="StartContainer for \"367f4e20ec73bb53158175446e4e0d9981dabb2ae72546e68cbb8a7162e773b6\"" Sep 6 00:05:36.761096 systemd[1]: Started cri-containerd-367f4e20ec73bb53158175446e4e0d9981dabb2ae72546e68cbb8a7162e773b6.scope - libcontainer container 367f4e20ec73bb53158175446e4e0d9981dabb2ae72546e68cbb8a7162e773b6. Sep 6 00:05:36.782520 containerd[1433]: time="2025-09-06T00:05:36.782479444Z" level=info msg="StartContainer for \"367f4e20ec73bb53158175446e4e0d9981dabb2ae72546e68cbb8a7162e773b6\" returns successfully" Sep 6 00:05:36.865110 kubelet[2493]: E0906 00:05:36.864468 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:36.865235 containerd[1433]: time="2025-09-06T00:05:36.865114252Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f5tnw,Uid:6bd01021-bacd-4264-afa2-0e297c8a13db,Namespace:kube-system,Attempt:0,}" Sep 6 00:05:36.888262 containerd[1433]: time="2025-09-06T00:05:36.886401674Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 6 00:05:36.888262 containerd[1433]: time="2025-09-06T00:05:36.886468772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 6 00:05:36.888262 containerd[1433]: time="2025-09-06T00:05:36.886483856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:36.888262 containerd[1433]: time="2025-09-06T00:05:36.886585124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 6 00:05:36.909147 systemd[1]: Started cri-containerd-a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677.scope - libcontainer container a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677. Sep 6 00:05:36.945212 containerd[1433]: time="2025-09-06T00:05:36.945170886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-f5tnw,Uid:6bd01021-bacd-4264-afa2-0e297c8a13db,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677\"" Sep 6 00:05:36.945840 kubelet[2493]: E0906 00:05:36.945818 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:37.760747 kubelet[2493]: E0906 00:05:37.760640 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:37.769582 kubelet[2493]: I0906 00:05:37.769418 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g85b8" podStartSLOduration=1.7694027079999999 podStartE2EDuration="1.769402708s" podCreationTimestamp="2025-09-06 00:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:37.769387784 +0000 UTC m=+8.131801418" watchObservedRunningTime="2025-09-06 00:05:37.769402708 +0000 UTC m=+8.131816342" Sep 6 00:05:38.766670 kubelet[2493]: E0906 00:05:38.766630 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:41.337432 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4220330237.mount: Deactivated 
successfully. Sep 6 00:05:41.718308 kubelet[2493]: E0906 00:05:41.718202 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:41.783565 kubelet[2493]: E0906 00:05:41.782188 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:42.598096 containerd[1433]: time="2025-09-06T00:05:42.598039529Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:05:42.598499 containerd[1433]: time="2025-09-06T00:05:42.598436727Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 6 00:05:42.599730 containerd[1433]: time="2025-09-06T00:05:42.599659127Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:05:42.601444 containerd[1433]: time="2025-09-06T00:05:42.601414671Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.890821457s" Sep 6 00:05:42.601512 containerd[1433]: time="2025-09-06T00:05:42.601447117Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 6 00:05:42.602316 containerd[1433]: time="2025-09-06T00:05:42.602291802Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 6 00:05:42.607818 containerd[1433]: time="2025-09-06T00:05:42.607675017Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 6 00:05:42.634650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1893042365.mount: Deactivated successfully. Sep 6 00:05:42.635623 containerd[1433]: time="2025-09-06T00:05:42.635581923Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\"" Sep 6 00:05:42.636997 containerd[1433]: time="2025-09-06T00:05:42.636929227Z" level=info msg="StartContainer for \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\"" Sep 6 00:05:42.665117 systemd[1]: Started cri-containerd-d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4.scope - libcontainer container d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4. Sep 6 00:05:42.687587 containerd[1433]: time="2025-09-06T00:05:42.687544022Z" level=info msg="StartContainer for \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\" returns successfully" Sep 6 00:05:42.700873 systemd[1]: cri-containerd-d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4.scope: Deactivated successfully. 
Sep 6 00:05:42.769198 containerd[1433]: time="2025-09-06T00:05:42.764713579Z" level=info msg="shim disconnected" id=d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4 namespace=k8s.io Sep 6 00:05:42.769198 containerd[1433]: time="2025-09-06T00:05:42.769191936Z" level=warning msg="cleaning up after shim disconnected" id=d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4 namespace=k8s.io Sep 6 00:05:42.769198 containerd[1433]: time="2025-09-06T00:05:42.769205778Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:05:42.784848 kubelet[2493]: E0906 00:05:42.784799 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:43.632115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4-rootfs.mount: Deactivated successfully. Sep 6 00:05:43.788099 kubelet[2493]: E0906 00:05:43.788071 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:43.794814 kubelet[2493]: E0906 00:05:43.794761 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:43.800017 containerd[1433]: time="2025-09-06T00:05:43.799232596Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 6 00:05:43.831201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1941809262.mount: Deactivated successfully. 
Sep 6 00:05:43.845704 containerd[1433]: time="2025-09-06T00:05:43.845658677Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\"" Sep 6 00:05:43.846366 containerd[1433]: time="2025-09-06T00:05:43.846336443Z" level=info msg="StartContainer for \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\"" Sep 6 00:05:43.880163 systemd[1]: Started cri-containerd-56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871.scope - libcontainer container 56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871. Sep 6 00:05:43.902960 containerd[1433]: time="2025-09-06T00:05:43.902596395Z" level=info msg="StartContainer for \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\" returns successfully" Sep 6 00:05:43.911956 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 6 00:05:43.912188 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:05:43.912253 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 6 00:05:43.919313 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 6 00:05:43.919486 systemd[1]: cri-containerd-56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871.scope: Deactivated successfully. 
Sep 6 00:05:43.945057 containerd[1433]: time="2025-09-06T00:05:43.944995046Z" level=info msg="shim disconnected" id=56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871 namespace=k8s.io Sep 6 00:05:43.945057 containerd[1433]: time="2025-09-06T00:05:43.945056617Z" level=warning msg="cleaning up after shim disconnected" id=56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871 namespace=k8s.io Sep 6 00:05:43.945241 containerd[1433]: time="2025-09-06T00:05:43.945065899Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:05:43.950686 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 6 00:05:44.094986 update_engine[1423]: I20250906 00:05:44.094868 1423 update_attempter.cc:509] Updating boot flags... Sep 6 00:05:44.119105 kubelet[2493]: E0906 00:05:44.119073 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:44.132231 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3052) Sep 6 00:05:44.197957 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3056) Sep 6 00:05:44.234510 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3056) Sep 6 00:05:44.294700 containerd[1433]: time="2025-09-06T00:05:44.294638917Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:05:44.296053 containerd[1433]: time="2025-09-06T00:05:44.296010920Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 6 00:05:44.297199 containerd[1433]: time="2025-09-06T00:05:44.297159043Z" 
level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 6 00:05:44.298673 containerd[1433]: time="2025-09-06T00:05:44.298642145Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.696317337s" Sep 6 00:05:44.298788 containerd[1433]: time="2025-09-06T00:05:44.298770008Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 6 00:05:44.302425 containerd[1433]: time="2025-09-06T00:05:44.302392889Z" level=info msg="CreateContainer within sandbox \"a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 6 00:05:44.313567 containerd[1433]: time="2025-09-06T00:05:44.313524659Z" level=info msg="CreateContainer within sandbox \"a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\"" Sep 6 00:05:44.314329 containerd[1433]: time="2025-09-06T00:05:44.314304597Z" level=info msg="StartContainer for \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\"" Sep 6 00:05:44.341138 systemd[1]: Started cri-containerd-839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40.scope - libcontainer container 839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40. 
Sep 6 00:05:44.365608 containerd[1433]: time="2025-09-06T00:05:44.365518461Z" level=info msg="StartContainer for \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\" returns successfully" Sep 6 00:05:44.633148 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871-rootfs.mount: Deactivated successfully. Sep 6 00:05:44.799629 kubelet[2493]: E0906 00:05:44.799461 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:44.802202 kubelet[2493]: E0906 00:05:44.802157 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:44.804296 containerd[1433]: time="2025-09-06T00:05:44.804256107Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 6 00:05:44.804595 kubelet[2493]: E0906 00:05:44.804317 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:44.835204 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount362973350.mount: Deactivated successfully. 
Sep 6 00:05:44.840443 kubelet[2493]: I0906 00:05:44.840378 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-f5tnw" podStartSLOduration=1.487023722 podStartE2EDuration="8.840272201s" podCreationTimestamp="2025-09-06 00:05:36 +0000 UTC" firstStartedPulling="2025-09-06 00:05:36.94626098 +0000 UTC m=+7.308674614" lastFinishedPulling="2025-09-06 00:05:44.299509459 +0000 UTC m=+14.661923093" observedRunningTime="2025-09-06 00:05:44.83969842 +0000 UTC m=+15.202112054" watchObservedRunningTime="2025-09-06 00:05:44.840272201 +0000 UTC m=+15.202685835" Sep 6 00:05:44.846632 containerd[1433]: time="2025-09-06T00:05:44.846566315Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\"" Sep 6 00:05:44.847271 containerd[1433]: time="2025-09-06T00:05:44.847241835Z" level=info msg="StartContainer for \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\"" Sep 6 00:05:44.886197 systemd[1]: Started cri-containerd-cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a.scope - libcontainer container cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a. Sep 6 00:05:44.923102 systemd[1]: cri-containerd-cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a.scope: Deactivated successfully. 
Sep 6 00:05:44.938199 containerd[1433]: time="2025-09-06T00:05:44.938140442Z" level=info msg="StartContainer for \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\" returns successfully" Sep 6 00:05:45.035666 containerd[1433]: time="2025-09-06T00:05:45.035598712Z" level=info msg="shim disconnected" id=cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a namespace=k8s.io Sep 6 00:05:45.035666 containerd[1433]: time="2025-09-06T00:05:45.035660843Z" level=warning msg="cleaning up after shim disconnected" id=cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a namespace=k8s.io Sep 6 00:05:45.035666 containerd[1433]: time="2025-09-06T00:05:45.035669444Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:05:45.632294 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a-rootfs.mount: Deactivated successfully. Sep 6 00:05:45.805793 kubelet[2493]: E0906 00:05:45.805760 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:45.806203 kubelet[2493]: E0906 00:05:45.805802 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:45.810119 containerd[1433]: time="2025-09-06T00:05:45.810077854Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 6 00:05:45.830014 containerd[1433]: time="2025-09-06T00:05:45.829930157Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id 
\"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\"" Sep 6 00:05:45.830739 containerd[1433]: time="2025-09-06T00:05:45.830574945Z" level=info msg="StartContainer for \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\"" Sep 6 00:05:45.859105 systemd[1]: Started cri-containerd-4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581.scope - libcontainer container 4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581. Sep 6 00:05:45.879505 systemd[1]: cri-containerd-4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581.scope: Deactivated successfully. Sep 6 00:05:45.881672 containerd[1433]: time="2025-09-06T00:05:45.881621262Z" level=info msg="StartContainer for \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\" returns successfully" Sep 6 00:05:45.902002 containerd[1433]: time="2025-09-06T00:05:45.901859630Z" level=info msg="shim disconnected" id=4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581 namespace=k8s.io Sep 6 00:05:45.902002 containerd[1433]: time="2025-09-06T00:05:45.901920560Z" level=warning msg="cleaning up after shim disconnected" id=4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581 namespace=k8s.io Sep 6 00:05:45.902002 containerd[1433]: time="2025-09-06T00:05:45.901929482Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 6 00:05:46.632356 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581-rootfs.mount: Deactivated successfully. 
Sep 6 00:05:46.813540 kubelet[2493]: E0906 00:05:46.813503 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:46.825708 containerd[1433]: time="2025-09-06T00:05:46.825663878Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 6 00:05:46.857146 containerd[1433]: time="2025-09-06T00:05:46.856998023Z" level=info msg="CreateContainer within sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\"" Sep 6 00:05:46.858390 containerd[1433]: time="2025-09-06T00:05:46.857639606Z" level=info msg="StartContainer for \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\"" Sep 6 00:05:46.893137 systemd[1]: Started cri-containerd-defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32.scope - libcontainer container defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32. Sep 6 00:05:46.918534 containerd[1433]: time="2025-09-06T00:05:46.918473841Z" level=info msg="StartContainer for \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\" returns successfully" Sep 6 00:05:47.052050 kubelet[2493]: I0906 00:05:47.052012 2493 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 6 00:05:47.097071 systemd[1]: Created slice kubepods-burstable-pod6896931a_3130_4a23_951f_2859164682e1.slice - libcontainer container kubepods-burstable-pod6896931a_3130_4a23_951f_2859164682e1.slice. Sep 6 00:05:47.108804 systemd[1]: Created slice kubepods-burstable-pod403e5964_aa20_4542_9efb_7b3aebd4c6c8.slice - libcontainer container kubepods-burstable-pod403e5964_aa20_4542_9efb_7b3aebd4c6c8.slice. 
Sep 6 00:05:47.151871 kubelet[2493]: I0906 00:05:47.151741 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqq74\" (UniqueName: \"kubernetes.io/projected/6896931a-3130-4a23-951f-2859164682e1-kube-api-access-sqq74\") pod \"coredns-674b8bbfcf-96v6v\" (UID: \"6896931a-3130-4a23-951f-2859164682e1\") " pod="kube-system/coredns-674b8bbfcf-96v6v" Sep 6 00:05:47.151871 kubelet[2493]: I0906 00:05:47.151794 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtzt7\" (UniqueName: \"kubernetes.io/projected/403e5964-aa20-4542-9efb-7b3aebd4c6c8-kube-api-access-qtzt7\") pod \"coredns-674b8bbfcf-s5jgk\" (UID: \"403e5964-aa20-4542-9efb-7b3aebd4c6c8\") " pod="kube-system/coredns-674b8bbfcf-s5jgk" Sep 6 00:05:47.151871 kubelet[2493]: I0906 00:05:47.151835 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6896931a-3130-4a23-951f-2859164682e1-config-volume\") pod \"coredns-674b8bbfcf-96v6v\" (UID: \"6896931a-3130-4a23-951f-2859164682e1\") " pod="kube-system/coredns-674b8bbfcf-96v6v" Sep 6 00:05:47.151871 kubelet[2493]: I0906 00:05:47.151856 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/403e5964-aa20-4542-9efb-7b3aebd4c6c8-config-volume\") pod \"coredns-674b8bbfcf-s5jgk\" (UID: \"403e5964-aa20-4542-9efb-7b3aebd4c6c8\") " pod="kube-system/coredns-674b8bbfcf-s5jgk" Sep 6 00:05:47.404918 kubelet[2493]: E0906 00:05:47.404805 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:47.405629 containerd[1433]: time="2025-09-06T00:05:47.405589383Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-96v6v,Uid:6896931a-3130-4a23-951f-2859164682e1,Namespace:kube-system,Attempt:0,}" Sep 6 00:05:47.412749 kubelet[2493]: E0906 00:05:47.412263 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:47.413968 containerd[1433]: time="2025-09-06T00:05:47.412892819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s5jgk,Uid:403e5964-aa20-4542-9efb-7b3aebd4c6c8,Namespace:kube-system,Attempt:0,}" Sep 6 00:05:47.817884 kubelet[2493]: E0906 00:05:47.817854 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:47.834700 kubelet[2493]: I0906 00:05:47.834457 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-98fpz" podStartSLOduration=5.941201425 podStartE2EDuration="11.834442119s" podCreationTimestamp="2025-09-06 00:05:36 +0000 UTC" firstStartedPulling="2025-09-06 00:05:36.70890848 +0000 UTC m=+7.071322114" lastFinishedPulling="2025-09-06 00:05:42.602149174 +0000 UTC m=+12.964562808" observedRunningTime="2025-09-06 00:05:47.834012893 +0000 UTC m=+18.196426527" watchObservedRunningTime="2025-09-06 00:05:47.834442119 +0000 UTC m=+18.196855713" Sep 6 00:05:48.819631 kubelet[2493]: E0906 00:05:48.819588 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 6 00:05:48.969427 systemd-networkd[1383]: cilium_host: Link UP Sep 6 00:05:48.969541 systemd-networkd[1383]: cilium_net: Link UP Sep 6 00:05:48.971061 systemd-networkd[1383]: cilium_net: Gained carrier Sep 6 00:05:48.971241 systemd-networkd[1383]: cilium_host: Gained carrier Sep 6 00:05:48.972997 systemd-networkd[1383]: 
cilium_host: Gained IPv6LL
Sep 6 00:05:49.048125 systemd-networkd[1383]: cilium_vxlan: Link UP
Sep 6 00:05:49.048266 systemd-networkd[1383]: cilium_vxlan: Gained carrier
Sep 6 00:05:49.299961 kernel: NET: Registered PF_ALG protocol family
Sep 6 00:05:49.333105 systemd-networkd[1383]: cilium_net: Gained IPv6LL
Sep 6 00:05:49.821971 kubelet[2493]: E0906 00:05:49.821908 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:49.872821 systemd-networkd[1383]: lxc_health: Link UP
Sep 6 00:05:49.881162 systemd-networkd[1383]: lxc_health: Gained carrier
Sep 6 00:05:49.963544 systemd-networkd[1383]: lxc823a174b9434: Link UP
Sep 6 00:05:49.970008 kernel: eth0: renamed from tmpf4d43
Sep 6 00:05:49.978014 systemd-networkd[1383]: lxc823a174b9434: Gained carrier
Sep 6 00:05:50.453132 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL
Sep 6 00:05:50.460087 systemd-networkd[1383]: lxc208c26d86122: Link UP
Sep 6 00:05:50.468032 kernel: eth0: renamed from tmpc61d7
Sep 6 00:05:50.475798 systemd-networkd[1383]: lxc208c26d86122: Gained carrier
Sep 6 00:05:50.823584 kubelet[2493]: E0906 00:05:50.823525 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:51.221077 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Sep 6 00:05:51.477974 systemd-networkd[1383]: lxc823a174b9434: Gained IPv6LL
Sep 6 00:05:52.053101 systemd-networkd[1383]: lxc208c26d86122: Gained IPv6LL
Sep 6 00:05:52.817006 kubelet[2493]: I0906 00:05:52.816502 2493 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 6 00:05:52.817006 kubelet[2493]: E0906 00:05:52.816957 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:52.831967 kubelet[2493]: E0906 00:05:52.831464 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:53.443824 kernel: hrtimer: interrupt took 11335521 ns
Sep 6 00:05:53.592122 containerd[1433]: time="2025-09-06T00:05:53.591667037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:05:53.592122 containerd[1433]: time="2025-09-06T00:05:53.591716843Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:05:53.592122 containerd[1433]: time="2025-09-06T00:05:53.591727364Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:53.592122 containerd[1433]: time="2025-09-06T00:05:53.591802893Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:53.598826 containerd[1433]: time="2025-09-06T00:05:53.598730420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:05:53.598826 containerd[1433]: time="2025-09-06T00:05:53.598794748Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:05:53.599689 containerd[1433]: time="2025-09-06T00:05:53.598809670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:53.599999 containerd[1433]: time="2025-09-06T00:05:53.599837149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:05:53.615102 systemd[1]: Started cri-containerd-c61d7872a5269010bc96d81fc7208366a833fa2e3ee72a111307f2fe3368716b.scope - libcontainer container c61d7872a5269010bc96d81fc7208366a833fa2e3ee72a111307f2fe3368716b.
Sep 6 00:05:53.620197 systemd[1]: Started cri-containerd-f4d43888bf571c136d8aef8c391da9641f66a5f0a609fc0c83a8ee2eb5dec2a2.scope - libcontainer container f4d43888bf571c136d8aef8c391da9641f66a5f0a609fc0c83a8ee2eb5dec2a2.
Sep 6 00:05:53.627249 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 6 00:05:53.631858 systemd-resolved[1309]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 6 00:05:53.646024 containerd[1433]: time="2025-09-06T00:05:53.645978605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-s5jgk,Uid:403e5964-aa20-4542-9efb-7b3aebd4c6c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"c61d7872a5269010bc96d81fc7208366a833fa2e3ee72a111307f2fe3368716b\""
Sep 6 00:05:53.646668 kubelet[2493]: E0906 00:05:53.646559 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:53.651496 containerd[1433]: time="2025-09-06T00:05:53.651462924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-96v6v,Uid:6896931a-3130-4a23-951f-2859164682e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4d43888bf571c136d8aef8c391da9641f66a5f0a609fc0c83a8ee2eb5dec2a2\""
Sep 6 00:05:53.652074 kubelet[2493]: E0906 00:05:53.652046 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:53.652757 containerd[1433]: time="2025-09-06T00:05:53.652724151Z" level=info msg="CreateContainer within sandbox \"c61d7872a5269010bc96d81fc7208366a833fa2e3ee72a111307f2fe3368716b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:05:53.657351 containerd[1433]: time="2025-09-06T00:05:53.657323047Z" level=info msg="CreateContainer within sandbox \"f4d43888bf571c136d8aef8c391da9641f66a5f0a609fc0c83a8ee2eb5dec2a2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 6 00:05:53.671043 containerd[1433]: time="2025-09-06T00:05:53.671002561Z" level=info msg="CreateContainer within sandbox \"c61d7872a5269010bc96d81fc7208366a833fa2e3ee72a111307f2fe3368716b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bfea3b5ec1b20a01a77b94c21eae64f828876f718773b147fea574b67b69c270\""
Sep 6 00:05:53.672076 containerd[1433]: time="2025-09-06T00:05:53.672046282Z" level=info msg="StartContainer for \"bfea3b5ec1b20a01a77b94c21eae64f828876f718773b147fea574b67b69c270\""
Sep 6 00:05:53.674537 containerd[1433]: time="2025-09-06T00:05:53.674503368Z" level=info msg="CreateContainer within sandbox \"f4d43888bf571c136d8aef8c391da9641f66a5f0a609fc0c83a8ee2eb5dec2a2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"db634758f75399c932fcc794e168b5ad453bc8c48ca484d6499039ef05055224\""
Sep 6 00:05:53.674966 containerd[1433]: time="2025-09-06T00:05:53.674927778Z" level=info msg="StartContainer for \"db634758f75399c932fcc794e168b5ad453bc8c48ca484d6499039ef05055224\""
Sep 6 00:05:53.697115 systemd[1]: Started cri-containerd-bfea3b5ec1b20a01a77b94c21eae64f828876f718773b147fea574b67b69c270.scope - libcontainer container bfea3b5ec1b20a01a77b94c21eae64f828876f718773b147fea574b67b69c270.
Sep 6 00:05:53.702185 systemd[1]: Started cri-containerd-db634758f75399c932fcc794e168b5ad453bc8c48ca484d6499039ef05055224.scope - libcontainer container db634758f75399c932fcc794e168b5ad453bc8c48ca484d6499039ef05055224.
Sep 6 00:05:53.730598 containerd[1433]: time="2025-09-06T00:05:53.730489131Z" level=info msg="StartContainer for \"db634758f75399c932fcc794e168b5ad453bc8c48ca484d6499039ef05055224\" returns successfully"
Sep 6 00:05:53.733841 containerd[1433]: time="2025-09-06T00:05:53.733800957Z" level=info msg="StartContainer for \"bfea3b5ec1b20a01a77b94c21eae64f828876f718773b147fea574b67b69c270\" returns successfully"
Sep 6 00:05:53.833550 kubelet[2493]: E0906 00:05:53.833504 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:53.835371 kubelet[2493]: E0906 00:05:53.835331 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:53.846339 kubelet[2493]: I0906 00:05:53.846266 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-s5jgk" podStartSLOduration=17.846252258 podStartE2EDuration="17.846252258s" podCreationTimestamp="2025-09-06 00:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:53.845381237 +0000 UTC m=+24.207794871" watchObservedRunningTime="2025-09-06 00:05:53.846252258 +0000 UTC m=+24.208665932"
Sep 6 00:05:54.838108 kubelet[2493]: E0906 00:05:54.837397 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:54.838108 kubelet[2493]: E0906 00:05:54.837490 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:54.853606 kubelet[2493]: I0906 00:05:54.853193 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-96v6v" podStartSLOduration=18.85316972 podStartE2EDuration="18.85316972s" podCreationTimestamp="2025-09-06 00:05:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:05:53.857115884 +0000 UTC m=+24.219529518" watchObservedRunningTime="2025-09-06 00:05:54.85316972 +0000 UTC m=+25.215583354"
Sep 6 00:05:55.844258 kubelet[2493]: E0906 00:05:55.842340 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:55.844258 kubelet[2493]: E0906 00:05:55.844120 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:05:59.385273 systemd[1]: Started sshd@7-10.0.0.93:22-10.0.0.1:53152.service - OpenSSH per-connection server daemon (10.0.0.1:53152).
Sep 6 00:05:59.433603 sshd[3918]: Accepted publickey for core from 10.0.0.1 port 53152 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:05:59.435335 sshd[3918]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:05:59.440557 systemd-logind[1422]: New session 8 of user core.
Sep 6 00:05:59.452176 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 6 00:05:59.581464 sshd[3918]: pam_unix(sshd:session): session closed for user core
Sep 6 00:05:59.584706 systemd[1]: sshd@7-10.0.0.93:22-10.0.0.1:53152.service: Deactivated successfully.
Sep 6 00:05:59.588697 systemd[1]: session-8.scope: Deactivated successfully.
Sep 6 00:05:59.589990 systemd-logind[1422]: Session 8 logged out. Waiting for processes to exit.
Sep 6 00:05:59.591441 systemd-logind[1422]: Removed session 8.
Sep 6 00:06:04.592893 systemd[1]: Started sshd@8-10.0.0.93:22-10.0.0.1:41618.service - OpenSSH per-connection server daemon (10.0.0.1:41618).
Sep 6 00:06:04.641001 sshd[3936]: Accepted publickey for core from 10.0.0.1 port 41618 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:04.642321 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:04.646704 systemd-logind[1422]: New session 9 of user core.
Sep 6 00:06:04.660110 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 6 00:06:04.792823 sshd[3936]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:04.796405 systemd[1]: sshd@8-10.0.0.93:22-10.0.0.1:41618.service: Deactivated successfully.
Sep 6 00:06:04.798333 systemd[1]: session-9.scope: Deactivated successfully.
Sep 6 00:06:04.800519 systemd-logind[1422]: Session 9 logged out. Waiting for processes to exit.
Sep 6 00:06:04.801698 systemd-logind[1422]: Removed session 9.
Sep 6 00:06:09.804089 systemd[1]: Started sshd@9-10.0.0.93:22-10.0.0.1:41628.service - OpenSSH per-connection server daemon (10.0.0.1:41628).
Sep 6 00:06:09.846287 sshd[3953]: Accepted publickey for core from 10.0.0.1 port 41628 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:09.848347 sshd[3953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:09.855073 systemd-logind[1422]: New session 10 of user core.
Sep 6 00:06:09.862238 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 6 00:06:09.983168 sshd[3953]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:09.986400 systemd-logind[1422]: Session 10 logged out. Waiting for processes to exit.
Sep 6 00:06:09.986829 systemd[1]: sshd@9-10.0.0.93:22-10.0.0.1:41628.service: Deactivated successfully.
Sep 6 00:06:09.988461 systemd[1]: session-10.scope: Deactivated successfully.
Sep 6 00:06:09.989250 systemd-logind[1422]: Removed session 10.
Sep 6 00:06:14.997543 systemd[1]: Started sshd@10-10.0.0.93:22-10.0.0.1:57434.service - OpenSSH per-connection server daemon (10.0.0.1:57434).
Sep 6 00:06:15.036068 sshd[3968]: Accepted publickey for core from 10.0.0.1 port 57434 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:15.036592 sshd[3968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:15.042644 systemd-logind[1422]: New session 11 of user core.
Sep 6 00:06:15.047344 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 6 00:06:15.167340 sshd[3968]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:15.175805 systemd[1]: sshd@10-10.0.0.93:22-10.0.0.1:57434.service: Deactivated successfully.
Sep 6 00:06:15.177503 systemd[1]: session-11.scope: Deactivated successfully.
Sep 6 00:06:15.179373 systemd-logind[1422]: Session 11 logged out. Waiting for processes to exit.
Sep 6 00:06:15.190251 systemd[1]: Started sshd@11-10.0.0.93:22-10.0.0.1:57436.service - OpenSSH per-connection server daemon (10.0.0.1:57436).
Sep 6 00:06:15.194793 systemd-logind[1422]: Removed session 11.
Sep 6 00:06:15.224276 sshd[3984]: Accepted publickey for core from 10.0.0.1 port 57436 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:15.225613 sshd[3984]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:15.232253 systemd-logind[1422]: New session 12 of user core.
Sep 6 00:06:15.252193 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 6 00:06:15.423115 sshd[3984]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:15.430784 systemd[1]: sshd@11-10.0.0.93:22-10.0.0.1:57436.service: Deactivated successfully.
Sep 6 00:06:15.435002 systemd[1]: session-12.scope: Deactivated successfully.
Sep 6 00:06:15.441803 systemd-logind[1422]: Session 12 logged out. Waiting for processes to exit.
Sep 6 00:06:15.459574 systemd[1]: Started sshd@12-10.0.0.93:22-10.0.0.1:57442.service - OpenSSH per-connection server daemon (10.0.0.1:57442).
Sep 6 00:06:15.461294 systemd-logind[1422]: Removed session 12.
Sep 6 00:06:15.498484 sshd[3996]: Accepted publickey for core from 10.0.0.1 port 57442 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:15.499973 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:15.504673 systemd-logind[1422]: New session 13 of user core.
Sep 6 00:06:15.520182 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 6 00:06:15.632282 sshd[3996]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:15.635718 systemd[1]: sshd@12-10.0.0.93:22-10.0.0.1:57442.service: Deactivated successfully.
Sep 6 00:06:15.637475 systemd[1]: session-13.scope: Deactivated successfully.
Sep 6 00:06:15.639714 systemd-logind[1422]: Session 13 logged out. Waiting for processes to exit.
Sep 6 00:06:15.640560 systemd-logind[1422]: Removed session 13.
Sep 6 00:06:20.647709 systemd[1]: Started sshd@13-10.0.0.93:22-10.0.0.1:45748.service - OpenSSH per-connection server daemon (10.0.0.1:45748).
Sep 6 00:06:20.685555 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 45748 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:20.686082 sshd[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:20.690730 systemd-logind[1422]: New session 14 of user core.
Sep 6 00:06:20.702136 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 6 00:06:20.834188 sshd[4010]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:20.837478 systemd[1]: sshd@13-10.0.0.93:22-10.0.0.1:45748.service: Deactivated successfully.
Sep 6 00:06:20.840681 systemd[1]: session-14.scope: Deactivated successfully.
Sep 6 00:06:20.842230 systemd-logind[1422]: Session 14 logged out. Waiting for processes to exit.
Sep 6 00:06:20.843347 systemd-logind[1422]: Removed session 14.
Sep 6 00:06:25.844605 systemd[1]: Started sshd@14-10.0.0.93:22-10.0.0.1:45752.service - OpenSSH per-connection server daemon (10.0.0.1:45752).
Sep 6 00:06:25.901102 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 45752 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:25.902805 sshd[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:25.907819 systemd-logind[1422]: New session 15 of user core.
Sep 6 00:06:25.922157 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 6 00:06:26.056584 sshd[4024]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:26.080927 systemd[1]: sshd@14-10.0.0.93:22-10.0.0.1:45752.service: Deactivated successfully.
Sep 6 00:06:26.082716 systemd[1]: session-15.scope: Deactivated successfully.
Sep 6 00:06:26.084299 systemd-logind[1422]: Session 15 logged out. Waiting for processes to exit.
Sep 6 00:06:26.093289 systemd[1]: Started sshd@15-10.0.0.93:22-10.0.0.1:45754.service - OpenSSH per-connection server daemon (10.0.0.1:45754).
Sep 6 00:06:26.094517 systemd-logind[1422]: Removed session 15.
Sep 6 00:06:26.126527 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 45754 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:26.128629 sshd[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:26.133140 systemd-logind[1422]: New session 16 of user core.
Sep 6 00:06:26.147107 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 6 00:06:26.364102 sshd[4038]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:26.377544 systemd[1]: sshd@15-10.0.0.93:22-10.0.0.1:45754.service: Deactivated successfully.
Sep 6 00:06:26.379635 systemd[1]: session-16.scope: Deactivated successfully.
Sep 6 00:06:26.381483 systemd-logind[1422]: Session 16 logged out. Waiting for processes to exit.
Sep 6 00:06:26.390199 systemd[1]: Started sshd@16-10.0.0.93:22-10.0.0.1:45764.service - OpenSSH per-connection server daemon (10.0.0.1:45764).
Sep 6 00:06:26.391835 systemd-logind[1422]: Removed session 16.
Sep 6 00:06:26.429972 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 45764 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:26.431289 sshd[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:26.434765 systemd-logind[1422]: New session 17 of user core.
Sep 6 00:06:26.441080 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 6 00:06:27.282098 sshd[4050]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:27.294973 systemd[1]: sshd@16-10.0.0.93:22-10.0.0.1:45764.service: Deactivated successfully.
Sep 6 00:06:27.296531 systemd[1]: session-17.scope: Deactivated successfully.
Sep 6 00:06:27.298129 systemd-logind[1422]: Session 17 logged out. Waiting for processes to exit.
Sep 6 00:06:27.301849 systemd[1]: Started sshd@17-10.0.0.93:22-10.0.0.1:45774.service - OpenSSH per-connection server daemon (10.0.0.1:45774).
Sep 6 00:06:27.307576 systemd-logind[1422]: Removed session 17.
Sep 6 00:06:27.347078 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 45774 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:27.348614 sshd[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:27.353742 systemd-logind[1422]: New session 18 of user core.
Sep 6 00:06:27.366150 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 6 00:06:27.593229 sshd[4069]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:27.603013 systemd[1]: sshd@17-10.0.0.93:22-10.0.0.1:45774.service: Deactivated successfully.
Sep 6 00:06:27.605360 systemd[1]: session-18.scope: Deactivated successfully.
Sep 6 00:06:27.610644 systemd-logind[1422]: Session 18 logged out. Waiting for processes to exit.
Sep 6 00:06:27.615335 systemd[1]: Started sshd@18-10.0.0.93:22-10.0.0.1:45788.service - OpenSSH per-connection server daemon (10.0.0.1:45788).
Sep 6 00:06:27.616854 systemd-logind[1422]: Removed session 18.
Sep 6 00:06:27.648588 sshd[4082]: Accepted publickey for core from 10.0.0.1 port 45788 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:27.651436 sshd[4082]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:27.656688 systemd-logind[1422]: New session 19 of user core.
Sep 6 00:06:27.665116 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 6 00:06:27.791242 sshd[4082]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:27.797908 systemd-logind[1422]: Session 19 logged out. Waiting for processes to exit.
Sep 6 00:06:27.798562 systemd[1]: sshd@18-10.0.0.93:22-10.0.0.1:45788.service: Deactivated successfully.
Sep 6 00:06:27.801502 systemd[1]: session-19.scope: Deactivated successfully.
Sep 6 00:06:27.802928 systemd-logind[1422]: Removed session 19.
Sep 6 00:06:32.803770 systemd[1]: Started sshd@19-10.0.0.93:22-10.0.0.1:40744.service - OpenSSH per-connection server daemon (10.0.0.1:40744).
Sep 6 00:06:32.859709 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 40744 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:32.861536 sshd[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:32.869986 systemd-logind[1422]: New session 20 of user core.
Sep 6 00:06:32.886202 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 6 00:06:33.014524 sshd[4101]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:33.017606 systemd[1]: sshd@19-10.0.0.93:22-10.0.0.1:40744.service: Deactivated successfully.
Sep 6 00:06:33.020753 systemd[1]: session-20.scope: Deactivated successfully.
Sep 6 00:06:33.022401 systemd-logind[1422]: Session 20 logged out. Waiting for processes to exit.
Sep 6 00:06:33.023360 systemd-logind[1422]: Removed session 20.
Sep 6 00:06:38.028586 systemd[1]: Started sshd@20-10.0.0.93:22-10.0.0.1:40752.service - OpenSSH per-connection server daemon (10.0.0.1:40752).
Sep 6 00:06:38.063654 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 40752 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:38.065001 sshd[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:38.069175 systemd-logind[1422]: New session 21 of user core.
Sep 6 00:06:38.075135 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 6 00:06:38.191431 sshd[4118]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:38.207782 systemd[1]: sshd@20-10.0.0.93:22-10.0.0.1:40752.service: Deactivated successfully.
Sep 6 00:06:38.210600 systemd[1]: session-21.scope: Deactivated successfully.
Sep 6 00:06:38.213519 systemd-logind[1422]: Session 21 logged out. Waiting for processes to exit.
Sep 6 00:06:38.223351 systemd[1]: Started sshd@21-10.0.0.93:22-10.0.0.1:40756.service - OpenSSH per-connection server daemon (10.0.0.1:40756).
Sep 6 00:06:38.228403 systemd-logind[1422]: Removed session 21.
Sep 6 00:06:38.258658 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 40756 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:38.260553 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:38.265569 systemd-logind[1422]: New session 22 of user core.
Sep 6 00:06:38.273140 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 6 00:06:41.312999 containerd[1433]: time="2025-09-06T00:06:41.304701278Z" level=info msg="StopContainer for \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\" with timeout 30 (s)"
Sep 6 00:06:41.312999 containerd[1433]: time="2025-09-06T00:06:41.305239674Z" level=info msg="Stop container \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\" with signal terminated"
Sep 6 00:06:41.333143 systemd[1]: cri-containerd-839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40.scope: Deactivated successfully.
Sep 6 00:06:41.345703 containerd[1433]: time="2025-09-06T00:06:41.345642916Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 6 00:06:41.354728 containerd[1433]: time="2025-09-06T00:06:41.354511086Z" level=info msg="StopContainer for \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\" with timeout 2 (s)"
Sep 6 00:06:41.355043 containerd[1433]: time="2025-09-06T00:06:41.355009162Z" level=info msg="Stop container \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\" with signal terminated"
Sep 6 00:06:41.362442 systemd-networkd[1383]: lxc_health: Link DOWN
Sep 6 00:06:41.362451 systemd-networkd[1383]: lxc_health: Lost carrier
Sep 6 00:06:41.368871 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40-rootfs.mount: Deactivated successfully.
Sep 6 00:06:41.377555 containerd[1433]: time="2025-09-06T00:06:41.377391746Z" level=info msg="shim disconnected" id=839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40 namespace=k8s.io
Sep 6 00:06:41.377555 containerd[1433]: time="2025-09-06T00:06:41.377464386Z" level=warning msg="cleaning up after shim disconnected" id=839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40 namespace=k8s.io
Sep 6 00:06:41.377555 containerd[1433]: time="2025-09-06T00:06:41.377474146Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:41.388959 systemd[1]: cri-containerd-defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32.scope: Deactivated successfully.
Sep 6 00:06:41.391249 systemd[1]: cri-containerd-defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32.scope: Consumed 6.225s CPU time.
Sep 6 00:06:41.408451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32-rootfs.mount: Deactivated successfully.
Sep 6 00:06:41.413792 containerd[1433]: time="2025-09-06T00:06:41.413728141Z" level=info msg="shim disconnected" id=defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32 namespace=k8s.io
Sep 6 00:06:41.413792 containerd[1433]: time="2025-09-06T00:06:41.413816140Z" level=warning msg="cleaning up after shim disconnected" id=defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32 namespace=k8s.io
Sep 6 00:06:41.414023 containerd[1433]: time="2025-09-06T00:06:41.413827540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:41.421051 containerd[1433]: time="2025-09-06T00:06:41.421004243Z" level=info msg="StopContainer for \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\" returns successfully"
Sep 6 00:06:41.421816 containerd[1433]: time="2025-09-06T00:06:41.421790717Z" level=info msg="StopPodSandbox for \"a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677\""
Sep 6 00:06:41.421978 containerd[1433]: time="2025-09-06T00:06:41.421956996Z" level=info msg="Container to stop \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:41.424867 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677-shm.mount: Deactivated successfully.
Sep 6 00:06:41.434001 containerd[1433]: time="2025-09-06T00:06:41.433959781Z" level=info msg="StopContainer for \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\" returns successfully"
Sep 6 00:06:41.434526 containerd[1433]: time="2025-09-06T00:06:41.434504577Z" level=info msg="StopPodSandbox for \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\""
Sep 6 00:06:41.434584 containerd[1433]: time="2025-09-06T00:06:41.434536937Z" level=info msg="Container to stop \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:41.434584 containerd[1433]: time="2025-09-06T00:06:41.434548417Z" level=info msg="Container to stop \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:41.434584 containerd[1433]: time="2025-09-06T00:06:41.434559217Z" level=info msg="Container to stop \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:41.434584 containerd[1433]: time="2025-09-06T00:06:41.434568617Z" level=info msg="Container to stop \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:41.434584 containerd[1433]: time="2025-09-06T00:06:41.434578817Z" level=info msg="Container to stop \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 6 00:06:41.437075 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3-shm.mount: Deactivated successfully.
Sep 6 00:06:41.439601 systemd[1]: cri-containerd-a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677.scope: Deactivated successfully.
Sep 6 00:06:41.460320 systemd[1]: cri-containerd-539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3.scope: Deactivated successfully.
Sep 6 00:06:41.478226 containerd[1433]: time="2025-09-06T00:06:41.477998795Z" level=info msg="shim disconnected" id=a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677 namespace=k8s.io
Sep 6 00:06:41.478226 containerd[1433]: time="2025-09-06T00:06:41.478057755Z" level=warning msg="cleaning up after shim disconnected" id=a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677 namespace=k8s.io
Sep 6 00:06:41.478226 containerd[1433]: time="2025-09-06T00:06:41.478066755Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:41.488703 containerd[1433]: time="2025-09-06T00:06:41.488641711Z" level=info msg="shim disconnected" id=539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3 namespace=k8s.io
Sep 6 00:06:41.488703 containerd[1433]: time="2025-09-06T00:06:41.488701711Z" level=warning msg="cleaning up after shim disconnected" id=539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3 namespace=k8s.io
Sep 6 00:06:41.488703 containerd[1433]: time="2025-09-06T00:06:41.488710711Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:41.492704 containerd[1433]: time="2025-09-06T00:06:41.492481921Z" level=info msg="TearDown network for sandbox \"a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677\" successfully"
Sep 6 00:06:41.492704 containerd[1433]: time="2025-09-06T00:06:41.492519921Z" level=info msg="StopPodSandbox for \"a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677\" returns successfully"
Sep 6 00:06:41.505046 containerd[1433]: time="2025-09-06T00:06:41.504981743Z" level=info msg="TearDown network for sandbox \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" successfully"
Sep 6 00:06:41.517970 containerd[1433]: time="2025-09-06T00:06:41.517538444Z" level=info msg="StopPodSandbox for \"539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3\" returns successfully"
Sep 6 00:06:41.529047 kubelet[2493]: I0906 00:06:41.529011 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jr4hc\" (UniqueName: \"kubernetes.io/projected/6bd01021-bacd-4264-afa2-0e297c8a13db-kube-api-access-jr4hc\") pod \"6bd01021-bacd-4264-afa2-0e297c8a13db\" (UID: \"6bd01021-bacd-4264-afa2-0e297c8a13db\") "
Sep 6 00:06:41.529430 kubelet[2493]: I0906 00:06:41.529079 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bd01021-bacd-4264-afa2-0e297c8a13db-cilium-config-path\") pod \"6bd01021-bacd-4264-afa2-0e297c8a13db\" (UID: \"6bd01021-bacd-4264-afa2-0e297c8a13db\") "
Sep 6 00:06:41.534638 kubelet[2493]: I0906 00:06:41.534589 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6bd01021-bacd-4264-afa2-0e297c8a13db-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6bd01021-bacd-4264-afa2-0e297c8a13db" (UID: "6bd01021-bacd-4264-afa2-0e297c8a13db"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 6 00:06:41.535452 kubelet[2493]: I0906 00:06:41.535400 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6bd01021-bacd-4264-afa2-0e297c8a13db-kube-api-access-jr4hc" (OuterVolumeSpecName: "kube-api-access-jr4hc") pod "6bd01021-bacd-4264-afa2-0e297c8a13db" (UID: "6bd01021-bacd-4264-afa2-0e297c8a13db"). InnerVolumeSpecName "kube-api-access-jr4hc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 6 00:06:41.629500 kubelet[2493]: I0906 00:06:41.629375 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-etc-cni-netd\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629500 kubelet[2493]: I0906 00:06:41.629417 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-net\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629500 kubelet[2493]: I0906 00:06:41.629440 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-kernel\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629500 kubelet[2493]: I0906 00:06:41.629465 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82e02273-117b-4b61-8c77-b8cc92f40c43-clustermesh-secrets\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629500 kubelet[2493]: I0906 00:06:41.629481 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-config-path\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629500 kubelet[2493]: I0906 00:06:41.629502 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85lfb\" (UniqueName: \"kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-kube-api-access-85lfb\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629724 kubelet[2493]: I0906 00:06:41.629519 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-bpf-maps\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629724 kubelet[2493]: I0906 00:06:41.629533 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-xtables-lock\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629724 kubelet[2493]: I0906 00:06:41.629548 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-lib-modules\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629724 kubelet[2493]: I0906 00:06:41.629568 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-hubble-tls\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629724 kubelet[2493]: I0906 00:06:41.629580 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-run\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629724 kubelet[2493]: I0906 00:06:41.629596 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-hostproc\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629866 kubelet[2493]: I0906 00:06:41.629609 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-cgroup\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629866 kubelet[2493]: I0906 00:06:41.629621 2493 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cni-path\") pod \"82e02273-117b-4b61-8c77-b8cc92f40c43\" (UID: \"82e02273-117b-4b61-8c77-b8cc92f40c43\") "
Sep 6 00:06:41.629866 kubelet[2493]: I0906 00:06:41.629660 2493 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6bd01021-bacd-4264-afa2-0e297c8a13db-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 6 00:06:41.629866 kubelet[2493]: I0906 00:06:41.629670 2493 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jr4hc\" (UniqueName: \"kubernetes.io/projected/6bd01021-bacd-4264-afa2-0e297c8a13db-kube-api-access-jr4hc\") on node \"localhost\" DevicePath \"\""
Sep 6 00:06:41.629866 kubelet[2493]: I0906 00:06:41.629727 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cni-path" (OuterVolumeSpecName: "cni-path") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "cni-path".
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.629866 kubelet[2493]: I0906 00:06:41.629760 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.630023 kubelet[2493]: I0906 00:06:41.629773 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.630023 kubelet[2493]: I0906 00:06:41.629787 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.631966 kubelet[2493]: I0906 00:06:41.630118 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.631966 kubelet[2493]: I0906 00:06:41.630151 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.631966 kubelet[2493]: I0906 00:06:41.630850 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.632232 kubelet[2493]: I0906 00:06:41.632206 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 6 00:06:41.632322 kubelet[2493]: I0906 00:06:41.632310 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.632416 kubelet[2493]: I0906 00:06:41.632399 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-hostproc" (OuterVolumeSpecName: "hostproc") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.632515 kubelet[2493]: I0906 00:06:41.632498 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 6 00:06:41.632678 kubelet[2493]: I0906 00:06:41.632655 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e02273-117b-4b61-8c77-b8cc92f40c43-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 6 00:06:41.633099 kubelet[2493]: I0906 00:06:41.633070 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-kube-api-access-85lfb" (OuterVolumeSpecName: "kube-api-access-85lfb") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "kube-api-access-85lfb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:06:41.633144 kubelet[2493]: I0906 00:06:41.633124 2493 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "82e02273-117b-4b61-8c77-b8cc92f40c43" (UID: "82e02273-117b-4b61-8c77-b8cc92f40c43"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 6 00:06:41.729841 kubelet[2493]: I0906 00:06:41.729799 2493 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.729841 kubelet[2493]: I0906 00:06:41.729828 2493 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.729841 kubelet[2493]: I0906 00:06:41.729845 2493 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.729841 kubelet[2493]: I0906 00:06:41.729855 2493 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/82e02273-117b-4b61-8c77-b8cc92f40c43-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729864 2493 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729872 2493 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-85lfb\" (UniqueName: \"kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-kube-api-access-85lfb\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729881 2493 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729889 2493 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729897 2493 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729905 2493 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/82e02273-117b-4b61-8c77-b8cc92f40c43-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729912 2493 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730068 kubelet[2493]: I0906 00:06:41.729920 2493 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.730238 kubelet[2493]: I0906 00:06:41.729931 2493 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cilium-cgroup\") on node \"localhost\" DevicePath 
\"\"" Sep 6 00:06:41.730238 kubelet[2493]: I0906 00:06:41.729959 2493 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/82e02273-117b-4b61-8c77-b8cc92f40c43-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 6 00:06:41.741307 systemd[1]: Removed slice kubepods-besteffort-pod6bd01021_bacd_4264_afa2_0e297c8a13db.slice - libcontainer container kubepods-besteffort-pod6bd01021_bacd_4264_afa2_0e297c8a13db.slice. Sep 6 00:06:41.743331 systemd[1]: Removed slice kubepods-burstable-pod82e02273_117b_4b61_8c77_b8cc92f40c43.slice - libcontainer container kubepods-burstable-pod82e02273_117b_4b61_8c77_b8cc92f40c43.slice. Sep 6 00:06:41.743431 systemd[1]: kubepods-burstable-pod82e02273_117b_4b61_8c77_b8cc92f40c43.slice: Consumed 6.301s CPU time. Sep 6 00:06:42.001184 kubelet[2493]: I0906 00:06:42.001137 2493 scope.go:117] "RemoveContainer" containerID="defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32" Sep 6 00:06:42.004472 containerd[1433]: time="2025-09-06T00:06:42.004414379Z" level=info msg="RemoveContainer for \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\"" Sep 6 00:06:42.010971 containerd[1433]: time="2025-09-06T00:06:42.010477179Z" level=info msg="RemoveContainer for \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\" returns successfully" Sep 6 00:06:42.011067 kubelet[2493]: I0906 00:06:42.010798 2493 scope.go:117] "RemoveContainer" containerID="4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581" Sep 6 00:06:42.012575 containerd[1433]: time="2025-09-06T00:06:42.012549286Z" level=info msg="RemoveContainer for \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\"" Sep 6 00:06:42.020926 containerd[1433]: time="2025-09-06T00:06:42.020865911Z" level=info msg="RemoveContainer for \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\" returns successfully" Sep 6 00:06:42.021125 kubelet[2493]: I0906 00:06:42.021098 2493 
scope.go:117] "RemoveContainer" containerID="cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a" Sep 6 00:06:42.022430 containerd[1433]: time="2025-09-06T00:06:42.022350101Z" level=info msg="RemoveContainer for \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\"" Sep 6 00:06:42.025127 containerd[1433]: time="2025-09-06T00:06:42.024784725Z" level=info msg="RemoveContainer for \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\" returns successfully" Sep 6 00:06:42.025208 kubelet[2493]: I0906 00:06:42.025153 2493 scope.go:117] "RemoveContainer" containerID="56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871" Sep 6 00:06:42.026581 containerd[1433]: time="2025-09-06T00:06:42.026497554Z" level=info msg="RemoveContainer for \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\"" Sep 6 00:06:42.030288 containerd[1433]: time="2025-09-06T00:06:42.030255409Z" level=info msg="RemoveContainer for \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\" returns successfully" Sep 6 00:06:42.030664 kubelet[2493]: I0906 00:06:42.030504 2493 scope.go:117] "RemoveContainer" containerID="d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4" Sep 6 00:06:42.032027 containerd[1433]: time="2025-09-06T00:06:42.031968998Z" level=info msg="RemoveContainer for \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\"" Sep 6 00:06:42.034682 containerd[1433]: time="2025-09-06T00:06:42.034613541Z" level=info msg="RemoveContainer for \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\" returns successfully" Sep 6 00:06:42.034964 kubelet[2493]: I0906 00:06:42.034855 2493 scope.go:117] "RemoveContainer" containerID="defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32" Sep 6 00:06:42.036200 containerd[1433]: time="2025-09-06T00:06:42.035075218Z" level=error msg="ContainerStatus for \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\": not found" Sep 6 00:06:42.041475 kubelet[2493]: E0906 00:06:42.041444 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\": not found" containerID="defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32" Sep 6 00:06:42.041538 kubelet[2493]: I0906 00:06:42.041484 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32"} err="failed to get container status \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\": rpc error: code = NotFound desc = an error occurred when try to find container \"defc2c1df41c3f33e93d3f5d6d4aa806454f30acdd29c058b81ce1c48a55ec32\": not found" Sep 6 00:06:42.041538 kubelet[2493]: I0906 00:06:42.041518 2493 scope.go:117] "RemoveContainer" containerID="4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581" Sep 6 00:06:42.041793 containerd[1433]: time="2025-09-06T00:06:42.041745734Z" level=error msg="ContainerStatus for \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\": not found" Sep 6 00:06:42.041916 kubelet[2493]: E0906 00:06:42.041895 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\": not found" containerID="4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581" Sep 6 00:06:42.041969 kubelet[2493]: I0906 00:06:42.041924 2493 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581"} err="failed to get container status \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\": rpc error: code = NotFound desc = an error occurred when try to find container \"4e41975211b0a3d7a35b94d83c5d6ce434e1bf8291f66ec92d2ded1af03be581\": not found" Sep 6 00:06:42.041969 kubelet[2493]: I0906 00:06:42.041961 2493 scope.go:117] "RemoveContainer" containerID="cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a" Sep 6 00:06:42.042192 containerd[1433]: time="2025-09-06T00:06:42.042158811Z" level=error msg="ContainerStatus for \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\": not found" Sep 6 00:06:42.042319 kubelet[2493]: E0906 00:06:42.042299 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\": not found" containerID="cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a" Sep 6 00:06:42.042363 kubelet[2493]: I0906 00:06:42.042322 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a"} err="failed to get container status \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb542fef3a9971633d00c7ed47f2642bb8cc9b5056671232568f96f52dac218a\": not found" Sep 6 00:06:42.042363 kubelet[2493]: I0906 00:06:42.042338 2493 scope.go:117] "RemoveContainer" 
containerID="56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871" Sep 6 00:06:42.042548 containerd[1433]: time="2025-09-06T00:06:42.042505449Z" level=error msg="ContainerStatus for \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\": not found" Sep 6 00:06:42.042756 kubelet[2493]: E0906 00:06:42.042655 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\": not found" containerID="56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871" Sep 6 00:06:42.042756 kubelet[2493]: I0906 00:06:42.042680 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871"} err="failed to get container status \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\": rpc error: code = NotFound desc = an error occurred when try to find container \"56f1e92e25299f7afcbbfd4abdff37963a35cc61bd9df2242fd5688c8091b871\": not found" Sep 6 00:06:42.042756 kubelet[2493]: I0906 00:06:42.042695 2493 scope.go:117] "RemoveContainer" containerID="d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4" Sep 6 00:06:42.042969 containerd[1433]: time="2025-09-06T00:06:42.042867047Z" level=error msg="ContainerStatus for \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\": not found" Sep 6 00:06:42.043028 kubelet[2493]: E0906 00:06:42.043002 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\": not found" containerID="d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4" Sep 6 00:06:42.043061 kubelet[2493]: I0906 00:06:42.043031 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4"} err="failed to get container status \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5c3186c524a89b88f6884b6b0552c6a0907d0167b3b15635976aa456ca5a0a4\": not found" Sep 6 00:06:42.043061 kubelet[2493]: I0906 00:06:42.043048 2493 scope.go:117] "RemoveContainer" containerID="839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40" Sep 6 00:06:42.044246 containerd[1433]: time="2025-09-06T00:06:42.044212198Z" level=info msg="RemoveContainer for \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\"" Sep 6 00:06:42.047369 containerd[1433]: time="2025-09-06T00:06:42.047322897Z" level=info msg="RemoveContainer for \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\" returns successfully" Sep 6 00:06:42.047602 kubelet[2493]: I0906 00:06:42.047570 2493 scope.go:117] "RemoveContainer" containerID="839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40" Sep 6 00:06:42.048066 containerd[1433]: time="2025-09-06T00:06:42.048015973Z" level=error msg="ContainerStatus for \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\": not found" Sep 6 00:06:42.048198 kubelet[2493]: E0906 00:06:42.048149 2493 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find 
container \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\": not found" containerID="839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40" Sep 6 00:06:42.048249 kubelet[2493]: I0906 00:06:42.048220 2493 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40"} err="failed to get container status \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\": rpc error: code = NotFound desc = an error occurred when try to find container \"839df1cbf441f3875e14e5fbffe55f4e00621667fe279169bf5689cb0fb76e40\": not found" Sep 6 00:06:42.326437 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a2111fd4b0fafc85197561ee17e7cd0593503c29c8ffb413916ef1c68a184677-rootfs.mount: Deactivated successfully. Sep 6 00:06:42.326533 systemd[1]: var-lib-kubelet-pods-6bd01021\x2dbacd\x2d4264\x2dafa2\x2d0e297c8a13db-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djr4hc.mount: Deactivated successfully. Sep 6 00:06:42.326590 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-539bdc5c01cdc693aee7df95a8748a9e51bb5f3df6234c40348aedab0206dfe3-rootfs.mount: Deactivated successfully. Sep 6 00:06:42.326645 systemd[1]: var-lib-kubelet-pods-82e02273\x2d117b\x2d4b61\x2d8c77\x2db8cc92f40c43-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85lfb.mount: Deactivated successfully. Sep 6 00:06:42.326698 systemd[1]: var-lib-kubelet-pods-82e02273\x2d117b\x2d4b61\x2d8c77\x2db8cc92f40c43-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 6 00:06:42.326750 systemd[1]: var-lib-kubelet-pods-82e02273\x2d117b\x2d4b61\x2d8c77\x2db8cc92f40c43-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 6 00:06:43.253672 sshd[4133]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:43.261500 systemd[1]: sshd@21-10.0.0.93:22-10.0.0.1:40756.service: Deactivated successfully. Sep 6 00:06:43.264507 systemd[1]: session-22.scope: Deactivated successfully. Sep 6 00:06:43.267262 systemd[1]: session-22.scope: Consumed 2.337s CPU time. Sep 6 00:06:43.269450 systemd-logind[1422]: Session 22 logged out. Waiting for processes to exit. Sep 6 00:06:43.281308 systemd[1]: Started sshd@22-10.0.0.93:22-10.0.0.1:41850.service - OpenSSH per-connection server daemon (10.0.0.1:41850). Sep 6 00:06:43.282899 systemd-logind[1422]: Removed session 22. Sep 6 00:06:43.319167 sshd[4295]: Accepted publickey for core from 10.0.0.1 port 41850 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8 Sep 6 00:06:43.318885 sshd[4295]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 6 00:06:43.322962 systemd-logind[1422]: New session 23 of user core. Sep 6 00:06:43.334969 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 6 00:06:43.735979 kubelet[2493]: I0906 00:06:43.735907 2493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6bd01021-bacd-4264-afa2-0e297c8a13db" path="/var/lib/kubelet/pods/6bd01021-bacd-4264-afa2-0e297c8a13db/volumes" Sep 6 00:06:43.736314 kubelet[2493]: I0906 00:06:43.736295 2493 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e02273-117b-4b61-8c77-b8cc92f40c43" path="/var/lib/kubelet/pods/82e02273-117b-4b61-8c77-b8cc92f40c43/volumes" Sep 6 00:06:44.473560 sshd[4295]: pam_unix(sshd:session): session closed for user core Sep 6 00:06:44.484345 systemd[1]: sshd@22-10.0.0.93:22-10.0.0.1:41850.service: Deactivated successfully. Sep 6 00:06:44.491624 systemd[1]: session-23.scope: Deactivated successfully. Sep 6 00:06:44.491780 systemd[1]: session-23.scope: Consumed 1.043s CPU time. Sep 6 00:06:44.494327 systemd-logind[1422]: Session 23 logged out. 
Waiting for processes to exit.
Sep 6 00:06:44.499444 systemd[1]: Started sshd@23-10.0.0.93:22-10.0.0.1:41866.service - OpenSSH per-connection server daemon (10.0.0.1:41866).
Sep 6 00:06:44.508765 systemd-logind[1422]: Removed session 23.
Sep 6 00:06:44.515926 systemd[1]: Created slice kubepods-burstable-poda65d10c4_72f2_4bdc_be3d_4be93c73e4de.slice - libcontainer container kubepods-burstable-poda65d10c4_72f2_4bdc_be3d_4be93c73e4de.slice.
Sep 6 00:06:44.545536 kubelet[2493]: I0906 00:06:44.545484 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-etc-cni-netd\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545698 kubelet[2493]: I0906 00:06:44.545546 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-cilium-ipsec-secrets\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545698 kubelet[2493]: I0906 00:06:44.545567 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-hostproc\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545698 kubelet[2493]: I0906 00:06:44.545583 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-cilium-cgroup\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545698 kubelet[2493]: I0906 00:06:44.545624 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-lib-modules\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545698 kubelet[2493]: I0906 00:06:44.545661 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-clustermesh-secrets\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545698 kubelet[2493]: I0906 00:06:44.545698 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-cilium-config-path\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545845 kubelet[2493]: I0906 00:06:44.545726 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-host-proc-sys-net\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545845 kubelet[2493]: I0906 00:06:44.545764 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-bpf-maps\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545845 kubelet[2493]: I0906 00:06:44.545789 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-hubble-tls\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545845 kubelet[2493]: I0906 00:06:44.545822 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-host-proc-sys-kernel\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545950 kubelet[2493]: I0906 00:06:44.545853 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-cni-path\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545950 kubelet[2493]: I0906 00:06:44.545871 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-xtables-lock\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545950 kubelet[2493]: I0906 00:06:44.545885 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z788\" (UniqueName: \"kubernetes.io/projected/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-kube-api-access-2z788\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.545950 kubelet[2493]: I0906 00:06:44.545902 2493 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a65d10c4-72f2-4bdc-be3d-4be93c73e4de-cilium-run\") pod \"cilium-tdq48\" (UID: \"a65d10c4-72f2-4bdc-be3d-4be93c73e4de\") " pod="kube-system/cilium-tdq48"
Sep 6 00:06:44.546767 sshd[4308]: Accepted publickey for core from 10.0.0.1 port 41866 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:44.548091 sshd[4308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:44.552711 systemd-logind[1422]: New session 24 of user core.
Sep 6 00:06:44.565105 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 6 00:06:44.616026 sshd[4308]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:44.626558 systemd[1]: sshd@23-10.0.0.93:22-10.0.0.1:41866.service: Deactivated successfully.
Sep 6 00:06:44.628469 systemd[1]: session-24.scope: Deactivated successfully.
Sep 6 00:06:44.631538 systemd-logind[1422]: Session 24 logged out. Waiting for processes to exit.
Sep 6 00:06:44.644260 systemd[1]: Started sshd@24-10.0.0.93:22-10.0.0.1:41878.service - OpenSSH per-connection server daemon (10.0.0.1:41878).
Sep 6 00:06:44.645762 systemd-logind[1422]: Removed session 24.
Sep 6 00:06:44.685540 sshd[4316]: Accepted publickey for core from 10.0.0.1 port 41878 ssh2: RSA SHA256:E7E9sF+nY9ImF9J6oXtqDQFV+WdmWbsw1aLuJ7lYdh8
Sep 6 00:06:44.686866 sshd[4316]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 6 00:06:44.691051 systemd-logind[1422]: New session 25 of user core.
Sep 6 00:06:44.698116 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 6 00:06:44.783484 kubelet[2493]: E0906 00:06:44.783217 2493 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 6 00:06:44.824449 kubelet[2493]: E0906 00:06:44.824025 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:44.824588 containerd[1433]: time="2025-09-06T00:06:44.824532401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdq48,Uid:a65d10c4-72f2-4bdc-be3d-4be93c73e4de,Namespace:kube-system,Attempt:0,}"
Sep 6 00:06:44.859630 containerd[1433]: time="2025-09-06T00:06:44.859510457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 6 00:06:44.859736 containerd[1433]: time="2025-09-06T00:06:44.859678577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 6 00:06:44.859760 containerd[1433]: time="2025-09-06T00:06:44.859713856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:06:44.860373 containerd[1433]: time="2025-09-06T00:06:44.860302534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 6 00:06:44.889177 systemd[1]: Started cri-containerd-464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050.scope - libcontainer container 464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050.
Sep 6 00:06:44.908907 containerd[1433]: time="2025-09-06T00:06:44.908859374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tdq48,Uid:a65d10c4-72f2-4bdc-be3d-4be93c73e4de,Namespace:kube-system,Attempt:0,} returns sandbox id \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\""
Sep 6 00:06:44.909511 kubelet[2493]: E0906 00:06:44.909484 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:44.925022 containerd[1433]: time="2025-09-06T00:06:44.924975188Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 6 00:06:44.938241 containerd[1433]: time="2025-09-06T00:06:44.938191254Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98\""
Sep 6 00:06:44.938834 containerd[1433]: time="2025-09-06T00:06:44.938786891Z" level=info msg="StartContainer for \"c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98\""
Sep 6 00:06:44.972126 systemd[1]: Started cri-containerd-c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98.scope - libcontainer container c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98.
Sep 6 00:06:44.994379 containerd[1433]: time="2025-09-06T00:06:44.994321263Z" level=info msg="StartContainer for \"c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98\" returns successfully"
Sep 6 00:06:45.002447 systemd[1]: cri-containerd-c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98.scope: Deactivated successfully.
Sep 6 00:06:45.018003 kubelet[2493]: E0906 00:06:45.017452 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:45.040563 containerd[1433]: time="2025-09-06T00:06:45.040282001Z" level=info msg="shim disconnected" id=c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98 namespace=k8s.io
Sep 6 00:06:45.040563 containerd[1433]: time="2025-09-06T00:06:45.040471120Z" level=warning msg="cleaning up after shim disconnected" id=c19c9ce4684a5164581178c947e83e85f1904efebb4ac7d7f14f442461e3ad98 namespace=k8s.io
Sep 6 00:06:45.040563 containerd[1433]: time="2025-09-06T00:06:45.040481520Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:45.734273 kubelet[2493]: E0906 00:06:45.734225 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:46.021177 kubelet[2493]: E0906 00:06:46.021073 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:46.031819 containerd[1433]: time="2025-09-06T00:06:46.031770924Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 6 00:06:46.044399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368020163.mount: Deactivated successfully.
Sep 6 00:06:46.045621 containerd[1433]: time="2025-09-06T00:06:46.045574100Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df\""
Sep 6 00:06:46.046396 containerd[1433]: time="2025-09-06T00:06:46.046367778Z" level=info msg="StartContainer for \"d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df\""
Sep 6 00:06:46.089114 systemd[1]: Started cri-containerd-d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df.scope - libcontainer container d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df.
Sep 6 00:06:46.111698 containerd[1433]: time="2025-09-06T00:06:46.111641861Z" level=info msg="StartContainer for \"d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df\" returns successfully"
Sep 6 00:06:46.117654 systemd[1]: cri-containerd-d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df.scope: Deactivated successfully.
Sep 6 00:06:46.143574 containerd[1433]: time="2025-09-06T00:06:46.143513764Z" level=info msg="shim disconnected" id=d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df namespace=k8s.io
Sep 6 00:06:46.143574 containerd[1433]: time="2025-09-06T00:06:46.143569563Z" level=warning msg="cleaning up after shim disconnected" id=d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df namespace=k8s.io
Sep 6 00:06:46.143574 containerd[1433]: time="2025-09-06T00:06:46.143579883Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:46.153502 containerd[1433]: time="2025-09-06T00:06:46.153449066Z" level=warning msg="cleanup warnings time=\"2025-09-06T00:06:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 6 00:06:46.652737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d92d3d52f7332642886210ec07826f8a9b1b0877e757a57bdc1613b61084f8df-rootfs.mount: Deactivated successfully.
Sep 6 00:06:46.734464 kubelet[2493]: E0906 00:06:46.734333 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:47.023691 kubelet[2493]: E0906 00:06:47.023663 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:47.031391 containerd[1433]: time="2025-09-06T00:06:47.031212560Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 6 00:06:47.044467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2490472389.mount: Deactivated successfully.
Sep 6 00:06:47.049081 containerd[1433]: time="2025-09-06T00:06:47.049013028Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86\""
Sep 6 00:06:47.050713 containerd[1433]: time="2025-09-06T00:06:47.050566907Z" level=info msg="StartContainer for \"047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86\""
Sep 6 00:06:47.085204 systemd[1]: Started cri-containerd-047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86.scope - libcontainer container 047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86.
Sep 6 00:06:47.111405 systemd[1]: cri-containerd-047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86.scope: Deactivated successfully.
Sep 6 00:06:47.120690 containerd[1433]: time="2025-09-06T00:06:47.120646898Z" level=info msg="StartContainer for \"047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86\" returns successfully"
Sep 6 00:06:47.148066 containerd[1433]: time="2025-09-06T00:06:47.147983799Z" level=info msg="shim disconnected" id=047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86 namespace=k8s.io
Sep 6 00:06:47.148066 containerd[1433]: time="2025-09-06T00:06:47.148038879Z" level=warning msg="cleaning up after shim disconnected" id=047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86 namespace=k8s.io
Sep 6 00:06:47.148066 containerd[1433]: time="2025-09-06T00:06:47.148047599Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:47.652873 systemd[1]: run-containerd-runc-k8s.io-047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86-runc.YGspW4.mount: Deactivated successfully.
Sep 6 00:06:47.652993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-047f94efed0181af384099ca72bf2971478502c05def2c11309b02738453ef86-rootfs.mount: Deactivated successfully.
Sep 6 00:06:48.031519 kubelet[2493]: E0906 00:06:48.031363 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:48.040912 containerd[1433]: time="2025-09-06T00:06:48.040867499Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 6 00:06:48.069969 containerd[1433]: time="2025-09-06T00:06:48.069311590Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662\""
Sep 6 00:06:48.071704 containerd[1433]: time="2025-09-06T00:06:48.071398831Z" level=info msg="StartContainer for \"645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662\""
Sep 6 00:06:48.106097 systemd[1]: Started cri-containerd-645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662.scope - libcontainer container 645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662.
Sep 6 00:06:48.128576 systemd[1]: cri-containerd-645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662.scope: Deactivated successfully.
Sep 6 00:06:48.129276 containerd[1433]: time="2025-09-06T00:06:48.129170852Z" level=info msg="StartContainer for \"645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662\" returns successfully"
Sep 6 00:06:48.148524 containerd[1433]: time="2025-09-06T00:06:48.148470099Z" level=info msg="shim disconnected" id=645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662 namespace=k8s.io
Sep 6 00:06:48.148861 containerd[1433]: time="2025-09-06T00:06:48.148689739Z" level=warning msg="cleaning up after shim disconnected" id=645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662 namespace=k8s.io
Sep 6 00:06:48.148861 containerd[1433]: time="2025-09-06T00:06:48.148706459Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 6 00:06:48.652990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-645646acc1744ad8f1772655f2abb53c10a6ec440cf513e858c1bf6ff55c7662-rootfs.mount: Deactivated successfully.
Sep 6 00:06:49.053042 kubelet[2493]: E0906 00:06:49.053003 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:49.060963 containerd[1433]: time="2025-09-06T00:06:49.059970898Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 6 00:06:49.079188 containerd[1433]: time="2025-09-06T00:06:49.079134885Z" level=info msg="CreateContainer within sandbox \"464ac9607631fc689bb24715258f97aeed311c4df7c614b93222d87a85a03050\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b8ce75e3611b967de3c8d3a2ae210a05cbbdced7e4a687fab9c39bcb2a486010\""
Sep 6 00:06:49.079589 containerd[1433]: time="2025-09-06T00:06:49.079569966Z" level=info msg="StartContainer for \"b8ce75e3611b967de3c8d3a2ae210a05cbbdced7e4a687fab9c39bcb2a486010\""
Sep 6 00:06:49.110119 systemd[1]: Started cri-containerd-b8ce75e3611b967de3c8d3a2ae210a05cbbdced7e4a687fab9c39bcb2a486010.scope - libcontainer container b8ce75e3611b967de3c8d3a2ae210a05cbbdced7e4a687fab9c39bcb2a486010.
Sep 6 00:06:49.139696 containerd[1433]: time="2025-09-06T00:06:49.139648090Z" level=info msg="StartContainer for \"b8ce75e3611b967de3c8d3a2ae210a05cbbdced7e4a687fab9c39bcb2a486010\" returns successfully"
Sep 6 00:06:49.409061 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 6 00:06:50.057011 kubelet[2493]: E0906 00:06:50.056975 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:50.073020 kubelet[2493]: I0906 00:06:50.072965 2493 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tdq48" podStartSLOduration=6.072950634 podStartE2EDuration="6.072950634s" podCreationTimestamp="2025-09-06 00:06:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-06 00:06:50.072819593 +0000 UTC m=+80.435233227" watchObservedRunningTime="2025-09-06 00:06:50.072950634 +0000 UTC m=+80.435364268"
Sep 6 00:06:51.058716 kubelet[2493]: E0906 00:06:51.058669 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:52.259054 systemd-networkd[1383]: lxc_health: Link UP
Sep 6 00:06:52.273651 systemd-networkd[1383]: lxc_health: Gained carrier
Sep 6 00:06:52.828158 kubelet[2493]: E0906 00:06:52.828077 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:53.063556 kubelet[2493]: E0906 00:06:53.063521 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 6 00:06:53.621094 systemd-networkd[1383]: lxc_health: Gained IPv6LL
Sep 6 00:06:57.432554 sshd[4316]: pam_unix(sshd:session): session closed for user core
Sep 6 00:06:57.435754 systemd[1]: sshd@24-10.0.0.93:22-10.0.0.1:41878.service: Deactivated successfully.
Sep 6 00:06:57.437974 systemd[1]: session-25.scope: Deactivated successfully.
Sep 6 00:06:57.438714 systemd-logind[1422]: Session 25 logged out. Waiting for processes to exit.
Sep 6 00:06:57.439896 systemd-logind[1422]: Removed session 25.
Sep 6 00:06:57.733570 kubelet[2493]: E0906 00:06:57.733537 2493 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"