Sep 13 00:16:57.839445 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 13 00:16:57.839465 kernel: Linux version 6.6.106-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Sep 12 22:36:20 -00 2025 Sep 13 00:16:57.839475 kernel: KASLR enabled Sep 13 00:16:57.839481 kernel: efi: EFI v2.7 by EDK II Sep 13 00:16:57.839487 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18 Sep 13 00:16:57.839492 kernel: random: crng init done Sep 13 00:16:57.839510 kernel: ACPI: Early table checksum verification disabled Sep 13 00:16:57.839517 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS ) Sep 13 00:16:57.839523 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 13 00:16:57.839531 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839538 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839544 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839550 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839556 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839563 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839571 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839577 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839584 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 13 00:16:57.839590 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 13 00:16:57.839596 kernel: NUMA: Failed to 
initialise from firmware Sep 13 00:16:57.839603 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:16:57.839609 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Sep 13 00:16:57.839616 kernel: Zone ranges: Sep 13 00:16:57.839622 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:16:57.839628 kernel: DMA32 empty Sep 13 00:16:57.839636 kernel: Normal empty Sep 13 00:16:57.839642 kernel: Movable zone start for each node Sep 13 00:16:57.839649 kernel: Early memory node ranges Sep 13 00:16:57.839655 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff] Sep 13 00:16:57.839661 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Sep 13 00:16:57.839668 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Sep 13 00:16:57.839674 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Sep 13 00:16:57.839680 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Sep 13 00:16:57.839687 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Sep 13 00:16:57.839693 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 13 00:16:57.839699 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 13 00:16:57.839706 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Sep 13 00:16:57.839713 kernel: psci: probing for conduit method from ACPI. Sep 13 00:16:57.839720 kernel: psci: PSCIv1.1 detected in firmware. 
Sep 13 00:16:57.839726 kernel: psci: Using standard PSCI v0.2 function IDs Sep 13 00:16:57.839735 kernel: psci: Trusted OS migration not required Sep 13 00:16:57.839745 kernel: psci: SMC Calling Convention v1.1 Sep 13 00:16:57.839752 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 13 00:16:57.839760 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976 Sep 13 00:16:57.839767 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096 Sep 13 00:16:57.839774 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 13 00:16:57.839780 kernel: Detected PIPT I-cache on CPU0 Sep 13 00:16:57.839787 kernel: CPU features: detected: GIC system register CPU interface Sep 13 00:16:57.839794 kernel: CPU features: detected: Hardware dirty bit management Sep 13 00:16:57.839801 kernel: CPU features: detected: Spectre-v4 Sep 13 00:16:57.839807 kernel: CPU features: detected: Spectre-BHB Sep 13 00:16:57.839814 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 13 00:16:57.839821 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 13 00:16:57.839829 kernel: CPU features: detected: ARM erratum 1418040 Sep 13 00:16:57.839836 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 13 00:16:57.839842 kernel: alternatives: applying boot alternatives Sep 13 00:16:57.839851 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:16:57.839858 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Sep 13 00:16:57.839865 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 13 00:16:57.839872 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 13 00:16:57.839879 kernel: Fallback order for Node 0: 0 Sep 13 00:16:57.839885 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Sep 13 00:16:57.839892 kernel: Policy zone: DMA Sep 13 00:16:57.839899 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 13 00:16:57.839906 kernel: software IO TLB: area num 4. Sep 13 00:16:57.839913 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Sep 13 00:16:57.839920 kernel: Memory: 2386336K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39488K init, 897K bss, 185952K reserved, 0K cma-reserved) Sep 13 00:16:57.839927 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 13 00:16:57.839934 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 13 00:16:57.839941 kernel: rcu: RCU event tracing is enabled. Sep 13 00:16:57.839948 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 13 00:16:57.839955 kernel: Trampoline variant of Tasks RCU enabled. Sep 13 00:16:57.839962 kernel: Tracing variant of Tasks RCU enabled. Sep 13 00:16:57.839968 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 13 00:16:57.839975 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 13 00:16:57.839983 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 13 00:16:57.839990 kernel: GICv3: 256 SPIs implemented Sep 13 00:16:57.839997 kernel: GICv3: 0 Extended SPIs implemented Sep 13 00:16:57.840003 kernel: Root IRQ handler: gic_handle_irq Sep 13 00:16:57.840010 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 13 00:16:57.840017 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 13 00:16:57.840024 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 13 00:16:57.840030 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Sep 13 00:16:57.840037 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Sep 13 00:16:57.840044 kernel: GICv3: using LPI property table @0x00000000400f0000 Sep 13 00:16:57.840051 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Sep 13 00:16:57.840057 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 13 00:16:57.840065 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:16:57.840072 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 13 00:16:57.840079 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 13 00:16:57.840086 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 13 00:16:57.840093 kernel: arm-pv: using stolen time PV Sep 13 00:16:57.840100 kernel: Console: colour dummy device 80x25 Sep 13 00:16:57.840107 kernel: ACPI: Core revision 20230628 Sep 13 00:16:57.840114 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Sep 13 00:16:57.840121 kernel: pid_max: default: 32768 minimum: 301 Sep 13 00:16:57.840128 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Sep 13 00:16:57.840142 kernel: landlock: Up and running. Sep 13 00:16:57.840150 kernel: SELinux: Initializing. Sep 13 00:16:57.840157 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:16:57.840164 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 13 00:16:57.840171 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:16:57.840178 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 13 00:16:57.840185 kernel: rcu: Hierarchical SRCU implementation. Sep 13 00:16:57.840192 kernel: rcu: Max phase no-delay instances is 400. Sep 13 00:16:57.840214 kernel: Platform MSI: ITS@0x8080000 domain created Sep 13 00:16:57.840223 kernel: PCI/MSI: ITS@0x8080000 domain created Sep 13 00:16:57.840230 kernel: Remapping and enabling EFI services. Sep 13 00:16:57.840237 kernel: smp: Bringing up secondary CPUs ... 
Sep 13 00:16:57.840243 kernel: Detected PIPT I-cache on CPU1 Sep 13 00:16:57.840251 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 13 00:16:57.840258 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Sep 13 00:16:57.840265 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:16:57.840272 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 13 00:16:57.840279 kernel: Detected PIPT I-cache on CPU2 Sep 13 00:16:57.840286 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 13 00:16:57.840294 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Sep 13 00:16:57.840301 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:16:57.840312 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 13 00:16:57.840321 kernel: Detected PIPT I-cache on CPU3 Sep 13 00:16:57.840328 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 13 00:16:57.840335 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Sep 13 00:16:57.840343 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 13 00:16:57.840350 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 13 00:16:57.840357 kernel: smp: Brought up 1 node, 4 CPUs Sep 13 00:16:57.840365 kernel: SMP: Total of 4 processors activated. 
Sep 13 00:16:57.840373 kernel: CPU features: detected: 32-bit EL0 Support Sep 13 00:16:57.840380 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 13 00:16:57.840387 kernel: CPU features: detected: Common not Private translations Sep 13 00:16:57.840395 kernel: CPU features: detected: CRC32 instructions Sep 13 00:16:57.840402 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 13 00:16:57.840409 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 13 00:16:57.840416 kernel: CPU features: detected: LSE atomic instructions Sep 13 00:16:57.840425 kernel: CPU features: detected: Privileged Access Never Sep 13 00:16:57.840432 kernel: CPU features: detected: RAS Extension Support Sep 13 00:16:57.840439 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 13 00:16:57.840446 kernel: CPU: All CPU(s) started at EL1 Sep 13 00:16:57.840454 kernel: alternatives: applying system-wide alternatives Sep 13 00:16:57.840461 kernel: devtmpfs: initialized Sep 13 00:16:57.840468 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 13 00:16:57.840476 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 13 00:16:57.840483 kernel: pinctrl core: initialized pinctrl subsystem Sep 13 00:16:57.840491 kernel: SMBIOS 3.0.0 present. 
Sep 13 00:16:57.840504 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023 Sep 13 00:16:57.840512 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 13 00:16:57.840519 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 13 00:16:57.840526 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 13 00:16:57.840534 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 13 00:16:57.840541 kernel: audit: initializing netlink subsys (disabled) Sep 13 00:16:57.840548 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1 Sep 13 00:16:57.840556 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 13 00:16:57.840566 kernel: cpuidle: using governor menu Sep 13 00:16:57.840573 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 13 00:16:57.840581 kernel: ASID allocator initialised with 32768 entries Sep 13 00:16:57.840588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 13 00:16:57.840595 kernel: Serial: AMBA PL011 UART driver Sep 13 00:16:57.840603 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 13 00:16:57.840610 kernel: Modules: 0 pages in range for non-PLT usage Sep 13 00:16:57.840617 kernel: Modules: 508992 pages in range for PLT usage Sep 13 00:16:57.840624 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 13 00:16:57.840633 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 13 00:16:57.840640 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 13 00:16:57.840648 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 13 00:16:57.840655 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 13 00:16:57.840662 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 13 00:16:57.840669 kernel: HugeTLB: registered 64.0 KiB page size, 
pre-allocated 0 pages Sep 13 00:16:57.840677 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 13 00:16:57.840684 kernel: ACPI: Added _OSI(Module Device) Sep 13 00:16:57.840691 kernel: ACPI: Added _OSI(Processor Device) Sep 13 00:16:57.840699 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 13 00:16:57.840707 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 13 00:16:57.840714 kernel: ACPI: Interpreter enabled Sep 13 00:16:57.840721 kernel: ACPI: Using GIC for interrupt routing Sep 13 00:16:57.840728 kernel: ACPI: MCFG table detected, 1 entries Sep 13 00:16:57.840736 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 13 00:16:57.840743 kernel: printk: console [ttyAMA0] enabled Sep 13 00:16:57.840750 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 13 00:16:57.840874 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 13 00:16:57.840948 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 13 00:16:57.841014 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 13 00:16:57.841077 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 13 00:16:57.841150 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 13 00:16:57.841161 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 13 00:16:57.841168 kernel: PCI host bridge to bus 0000:00 Sep 13 00:16:57.841243 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 13 00:16:57.841306 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 13 00:16:57.841364 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 13 00:16:57.841421 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 13 00:16:57.841578 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 
0x060000 Sep 13 00:16:57.841660 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Sep 13 00:16:57.841726 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Sep 13 00:16:57.841794 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Sep 13 00:16:57.841859 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:16:57.841922 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Sep 13 00:16:57.841986 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Sep 13 00:16:57.842050 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Sep 13 00:16:57.842109 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 13 00:16:57.842177 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 13 00:16:57.842237 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 13 00:16:57.842247 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 13 00:16:57.842254 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 13 00:16:57.842262 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 13 00:16:57.842269 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 13 00:16:57.842277 kernel: iommu: Default domain type: Translated Sep 13 00:16:57.842284 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 13 00:16:57.842292 kernel: efivars: Registered efivars operations Sep 13 00:16:57.842299 kernel: vgaarb: loaded Sep 13 00:16:57.842308 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 13 00:16:57.842315 kernel: VFS: Disk quotas dquot_6.6.0 Sep 13 00:16:57.842323 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 13 00:16:57.842330 kernel: pnp: PnP ACPI init Sep 13 00:16:57.842403 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 13 00:16:57.842414 kernel: pnp: PnP ACPI: found 1 devices Sep 13 
00:16:57.842421 kernel: NET: Registered PF_INET protocol family Sep 13 00:16:57.842428 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 13 00:16:57.842438 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 13 00:16:57.842445 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 13 00:16:57.842452 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 13 00:16:57.842460 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 13 00:16:57.842467 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 13 00:16:57.842474 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:16:57.842482 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 13 00:16:57.842489 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 13 00:16:57.842504 kernel: PCI: CLS 0 bytes, default 64 Sep 13 00:16:57.842513 kernel: kvm [1]: HYP mode not available Sep 13 00:16:57.842521 kernel: Initialise system trusted keyrings Sep 13 00:16:57.842528 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 13 00:16:57.842535 kernel: Key type asymmetric registered Sep 13 00:16:57.842542 kernel: Asymmetric key parser 'x509' registered Sep 13 00:16:57.842549 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Sep 13 00:16:57.842557 kernel: io scheduler mq-deadline registered Sep 13 00:16:57.842564 kernel: io scheduler kyber registered Sep 13 00:16:57.842571 kernel: io scheduler bfq registered Sep 13 00:16:57.842580 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 13 00:16:57.842587 kernel: ACPI: button: Power Button [PWRB] Sep 13 00:16:57.842595 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 13 00:16:57.842680 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 13 
00:16:57.842690 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 13 00:16:57.842697 kernel: thunder_xcv, ver 1.0 Sep 13 00:16:57.842704 kernel: thunder_bgx, ver 1.0 Sep 13 00:16:57.842711 kernel: nicpf, ver 1.0 Sep 13 00:16:57.842719 kernel: nicvf, ver 1.0 Sep 13 00:16:57.842792 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 13 00:16:57.842853 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-13T00:16:57 UTC (1757722617) Sep 13 00:16:57.842863 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 13 00:16:57.842870 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Sep 13 00:16:57.842877 kernel: watchdog: Delayed init of the lockup detector failed: -19 Sep 13 00:16:57.842885 kernel: watchdog: Hard watchdog permanently disabled Sep 13 00:16:57.842892 kernel: NET: Registered PF_INET6 protocol family Sep 13 00:16:57.842899 kernel: Segment Routing with IPv6 Sep 13 00:16:57.842908 kernel: In-situ OAM (IOAM) with IPv6 Sep 13 00:16:57.842915 kernel: NET: Registered PF_PACKET protocol family Sep 13 00:16:57.842923 kernel: Key type dns_resolver registered Sep 13 00:16:57.842930 kernel: registered taskstats version 1 Sep 13 00:16:57.842937 kernel: Loading compiled-in X.509 certificates Sep 13 00:16:57.842945 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.106-flatcar: 036ad4721a31543be5c000f2896b40d1e5515c6e' Sep 13 00:16:57.842952 kernel: Key type .fscrypt registered Sep 13 00:16:57.842959 kernel: Key type fscrypt-provisioning registered Sep 13 00:16:57.842966 kernel: ima: No TPM chip found, activating TPM-bypass! 
Sep 13 00:16:57.842975 kernel: ima: Allocated hash algorithm: sha1 Sep 13 00:16:57.842982 kernel: ima: No architecture policies found Sep 13 00:16:57.842989 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 13 00:16:57.842997 kernel: clk: Disabling unused clocks Sep 13 00:16:57.843004 kernel: Freeing unused kernel memory: 39488K Sep 13 00:16:57.843011 kernel: Run /init as init process Sep 13 00:16:57.843018 kernel: with arguments: Sep 13 00:16:57.843025 kernel: /init Sep 13 00:16:57.843032 kernel: with environment: Sep 13 00:16:57.843041 kernel: HOME=/ Sep 13 00:16:57.843048 kernel: TERM=linux Sep 13 00:16:57.843055 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 13 00:16:57.843064 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Sep 13 00:16:57.843073 systemd[1]: Detected virtualization kvm. Sep 13 00:16:57.843081 systemd[1]: Detected architecture arm64. Sep 13 00:16:57.843088 systemd[1]: Running in initrd. Sep 13 00:16:57.843096 systemd[1]: No hostname configured, using default hostname. Sep 13 00:16:57.843105 systemd[1]: Hostname set to . Sep 13 00:16:57.843113 systemd[1]: Initializing machine ID from VM UUID. Sep 13 00:16:57.843121 systemd[1]: Queued start job for default target initrd.target. Sep 13 00:16:57.843129 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 13 00:16:57.843145 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 13 00:16:57.843155 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Sep 13 00:16:57.843162 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 13 00:16:57.843172 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 13 00:16:57.843180 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 13 00:16:57.843189 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 13 00:16:57.843197 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 13 00:16:57.843205 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 13 00:16:57.843213 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 13 00:16:57.843220 systemd[1]: Reached target paths.target - Path Units. Sep 13 00:16:57.843230 systemd[1]: Reached target slices.target - Slice Units. Sep 13 00:16:57.843237 systemd[1]: Reached target swap.target - Swaps. Sep 13 00:16:57.843245 systemd[1]: Reached target timers.target - Timer Units. Sep 13 00:16:57.843253 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 13 00:16:57.843261 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 13 00:16:57.843269 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 13 00:16:57.843277 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Sep 13 00:16:57.843284 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 13 00:16:57.843292 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 13 00:16:57.843301 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 13 00:16:57.843309 systemd[1]: Reached target sockets.target - Socket Units. 
Sep 13 00:16:57.843317 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 13 00:16:57.843325 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 13 00:16:57.843333 systemd[1]: Finished network-cleanup.service - Network Cleanup. Sep 13 00:16:57.843346 systemd[1]: Starting systemd-fsck-usr.service... Sep 13 00:16:57.843353 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 13 00:16:57.843361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 13 00:16:57.843370 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 13 00:16:57.843378 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 13 00:16:57.843386 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 13 00:16:57.843394 systemd[1]: Finished systemd-fsck-usr.service. Sep 13 00:16:57.843416 systemd-journald[236]: Collecting audit messages is disabled. Sep 13 00:16:57.843436 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 13 00:16:57.843445 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 13 00:16:57.843453 systemd-journald[236]: Journal started Sep 13 00:16:57.843473 systemd-journald[236]: Runtime Journal (/run/log/journal/99748fb3e9834c8b91d39f178b6015e6) is 5.9M, max 47.3M, 41.4M free. Sep 13 00:16:57.836039 systemd-modules-load[237]: Inserted module 'overlay' Sep 13 00:16:57.845118 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 13 00:16:57.847513 systemd[1]: Started systemd-journald.service - Journal Service. Sep 13 00:16:57.847537 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Sep 13 00:16:57.850251 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 13 00:16:57.852983 kernel: Bridge firewalling registered Sep 13 00:16:57.851911 systemd-modules-load[237]: Inserted module 'br_netfilter' Sep 13 00:16:57.852981 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 13 00:16:57.856719 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 13 00:16:57.858367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 13 00:16:57.860204 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 13 00:16:57.865404 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 13 00:16:57.867671 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 13 00:16:57.868709 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 13 00:16:57.875551 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 13 00:16:57.877239 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 13 00:16:57.879782 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 13 00:16:57.883382 dracut-cmdline[270]: dracut-dracut-053 Sep 13 00:16:57.885736 dracut-cmdline[270]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e1b46f3c9e154636c32f6cde6e746a00a6b37ca7432cb4e16d172c05f584a8c9 Sep 13 00:16:57.909484 systemd-resolved[281]: Positive Trust Anchors: Sep 13 00:16:57.909519 systemd-resolved[281]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 13 00:16:57.909552 systemd-resolved[281]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 13 00:16:57.914342 systemd-resolved[281]: Defaulting to hostname 'linux'. Sep 13 00:16:57.915399 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 13 00:16:57.919192 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 13 00:16:57.949529 kernel: SCSI subsystem initialized Sep 13 00:16:57.954509 kernel: Loading iSCSI transport class v2.0-870. Sep 13 00:16:57.961513 kernel: iscsi: registered transport (tcp) Sep 13 00:16:57.974520 kernel: iscsi: registered transport (qla4xxx) Sep 13 00:16:57.974542 kernel: QLogic iSCSI HBA Driver Sep 13 00:16:58.014601 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 13 00:16:58.028650 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 13 00:16:58.044297 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Sep 13 00:16:58.044377 kernel: device-mapper: uevent: version 1.0.3 Sep 13 00:16:58.044390 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Sep 13 00:16:58.089519 kernel: raid6: neonx8 gen() 15766 MB/s Sep 13 00:16:58.106516 kernel: raid6: neonx4 gen() 15670 MB/s Sep 13 00:16:58.123513 kernel: raid6: neonx2 gen() 13284 MB/s Sep 13 00:16:58.140509 kernel: raid6: neonx1 gen() 10523 MB/s Sep 13 00:16:58.157514 kernel: raid6: int64x8 gen() 6936 MB/s Sep 13 00:16:58.174512 kernel: raid6: int64x4 gen() 7350 MB/s Sep 13 00:16:58.191509 kernel: raid6: int64x2 gen() 6112 MB/s Sep 13 00:16:58.208527 kernel: raid6: int64x1 gen() 5050 MB/s Sep 13 00:16:58.208561 kernel: raid6: using algorithm neonx8 gen() 15766 MB/s Sep 13 00:16:58.225524 kernel: raid6: .... xor() 12010 MB/s, rmw enabled Sep 13 00:16:58.225537 kernel: raid6: using neon recovery algorithm Sep 13 00:16:58.230519 kernel: xor: measuring software checksum speed Sep 13 00:16:58.230533 kernel: 8regs : 19831 MB/sec Sep 13 00:16:58.231601 kernel: 32regs : 19344 MB/sec Sep 13 00:16:58.231614 kernel: arm64_neon : 24638 MB/sec Sep 13 00:16:58.231624 kernel: xor: using function: arm64_neon (24638 MB/sec) Sep 13 00:16:58.280706 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 13 00:16:58.290603 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 13 00:16:58.303625 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 13 00:16:58.314431 systemd-udevd[462]: Using default interface naming scheme 'v255'. Sep 13 00:16:58.317589 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 13 00:16:58.320675 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 13 00:16:58.334002 dracut-pre-trigger[469]: rd.md=0: removing MD RAID activation Sep 13 00:16:58.359119 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Sep 13 00:16:58.366705 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:16:58.404638 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:16:58.414647 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 13 00:16:58.424323 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:16:58.426241 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:16:58.427685 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:16:58.429615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:16:58.437655 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 13 00:16:58.447242 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:16:58.456922 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 13 00:16:58.457045 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 13 00:16:58.459608 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 13 00:16:58.459636 kernel: GPT:9289727 != 19775487
Sep 13 00:16:58.459646 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 13 00:16:58.461022 kernel: GPT:9289727 != 19775487
Sep 13 00:16:58.461037 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 13 00:16:58.461052 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:58.461262 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:16:58.461338 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:16:58.464641 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:16:58.465412 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:16:58.465465 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:58.467466 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:58.476668 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:16:58.479612 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (515)
Sep 13 00:16:58.483542 kernel: BTRFS: device fsid 29bc4da8-c689-46a2-a16a-b7bbc722db77 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (521)
Sep 13 00:16:58.487938 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 13 00:16:58.490524 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:16:58.495162 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 13 00:16:58.504800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:16:58.508336 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 13 00:16:58.509393 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 13 00:16:58.526658 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 13 00:16:58.528652 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 13 00:16:58.532209 disk-uuid[551]: Primary Header is updated.
Sep 13 00:16:58.532209 disk-uuid[551]: Secondary Entries is updated.
Sep 13 00:16:58.532209 disk-uuid[551]: Secondary Header is updated.
Sep 13 00:16:58.535522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:58.538543 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:58.541528 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:58.549331 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:16:59.543306 disk-uuid[552]: The operation has completed successfully.
Sep 13 00:16:59.544395 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 13 00:16:59.563653 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 13 00:16:59.563747 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 13 00:16:59.584667 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 13 00:16:59.587501 sh[573]: Success
Sep 13 00:16:59.597536 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Sep 13 00:16:59.631861 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 13 00:16:59.633355 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 13 00:16:59.634156 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 13 00:16:59.643897 kernel: BTRFS info (device dm-0): first mount of filesystem 29bc4da8-c689-46a2-a16a-b7bbc722db77
Sep 13 00:16:59.643938 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:16:59.643949 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Sep 13 00:16:59.645768 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 13 00:16:59.645784 kernel: BTRFS info (device dm-0): using free space tree
Sep 13 00:16:59.649228 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 13 00:16:59.650377 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 13 00:16:59.666656 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 13 00:16:59.668784 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 13 00:16:59.675073 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:16:59.675109 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:16:59.675133 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:16:59.677522 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:16:59.685263 systemd[1]: mnt-oem.mount: Deactivated successfully.
Sep 13 00:16:59.686647 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:16:59.693624 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 13 00:16:59.700660 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 13 00:16:59.755221 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:16:59.763649 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:16:59.771475 ignition[669]: Ignition 2.19.0
Sep 13 00:16:59.771486 ignition[669]: Stage: fetch-offline
Sep 13 00:16:59.771534 ignition[669]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:59.771543 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:59.771696 ignition[669]: parsed url from cmdline: ""
Sep 13 00:16:59.771699 ignition[669]: no config URL provided
Sep 13 00:16:59.771704 ignition[669]: reading system config file "/usr/lib/ignition/user.ign"
Sep 13 00:16:59.771711 ignition[669]: no config at "/usr/lib/ignition/user.ign"
Sep 13 00:16:59.771733 ignition[669]: op(1): [started] loading QEMU firmware config module
Sep 13 00:16:59.771738 ignition[669]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 13 00:16:59.778076 ignition[669]: op(1): [finished] loading QEMU firmware config module
Sep 13 00:16:59.783079 systemd-networkd[761]: lo: Link UP
Sep 13 00:16:59.783093 systemd-networkd[761]: lo: Gained carrier
Sep 13 00:16:59.783771 systemd-networkd[761]: Enumeration completed
Sep 13 00:16:59.784161 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:16:59.786046 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:59.786050 systemd-networkd[761]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:16:59.786668 systemd[1]: Reached target network.target - Network.
Sep 13 00:16:59.787162 systemd-networkd[761]: eth0: Link UP
Sep 13 00:16:59.787165 systemd-networkd[761]: eth0: Gained carrier
Sep 13 00:16:59.787173 systemd-networkd[761]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:16:59.807550 systemd-networkd[761]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:16:59.827472 ignition[669]: parsing config with SHA512: 43628a39fa0cfe3b79b997ed9d849410287cafba371bbfba8fd56f9fbb5f8ebb591c80cb845f0d32b7177d7f5120be56084da202330c222f9332ba732b40a1f7
Sep 13 00:16:59.831954 unknown[669]: fetched base config from "system"
Sep 13 00:16:59.831963 unknown[669]: fetched user config from "qemu"
Sep 13 00:16:59.832637 ignition[669]: fetch-offline: fetch-offline passed
Sep 13 00:16:59.833271 ignition[669]: Ignition finished successfully
Sep 13 00:16:59.835552 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:16:59.837454 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 13 00:16:59.847673 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 13 00:16:59.857950 ignition[768]: Ignition 2.19.0
Sep 13 00:16:59.857960 ignition[768]: Stage: kargs
Sep 13 00:16:59.858119 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:59.858141 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:59.859085 ignition[768]: kargs: kargs passed
Sep 13 00:16:59.859141 ignition[768]: Ignition finished successfully
Sep 13 00:16:59.862552 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 13 00:16:59.864917 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 13 00:16:59.878391 ignition[776]: Ignition 2.19.0
Sep 13 00:16:59.878401 ignition[776]: Stage: disks
Sep 13 00:16:59.878588 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Sep 13 00:16:59.878598 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:16:59.879552 ignition[776]: disks: disks passed
Sep 13 00:16:59.879600 ignition[776]: Ignition finished successfully
Sep 13 00:16:59.881760 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 13 00:16:59.882875 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 13 00:16:59.884134 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:16:59.885679 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:16:59.887172 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:16:59.888468 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:16:59.900652 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 13 00:16:59.912084 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Sep 13 00:16:59.916047 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 13 00:16:59.927609 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 13 00:16:59.970524 kernel: EXT4-fs (vda9): mounted filesystem d35fd879-6758-447b-9fdd-bb21dd7c5b2b r/w with ordered data mode. Quota mode: none.
Sep 13 00:16:59.970603 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 13 00:16:59.971681 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 13 00:16:59.982589 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:16:59.984153 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 13 00:16:59.986287 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 13 00:16:59.986337 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 13 00:16:59.986361 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:16:59.992452 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (795)
Sep 13 00:16:59.990479 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 13 00:16:59.992566 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 13 00:16:59.996811 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:16:59.996831 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:16:59.996848 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:16:59.998511 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:16:59.999979 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:17:00.035656 initrd-setup-root[820]: cut: /sysroot/etc/passwd: No such file or directory
Sep 13 00:17:00.040390 initrd-setup-root[827]: cut: /sysroot/etc/group: No such file or directory
Sep 13 00:17:00.044603 initrd-setup-root[834]: cut: /sysroot/etc/shadow: No such file or directory
Sep 13 00:17:00.048549 initrd-setup-root[841]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 13 00:17:00.123185 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 13 00:17:00.135630 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 13 00:17:00.137039 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 13 00:17:00.141520 kernel: BTRFS info (device vda6): last unmount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:17:00.156059 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 13 00:17:00.177958 ignition[912]: INFO : Ignition 2.19.0
Sep 13 00:17:00.177958 ignition[912]: INFO : Stage: mount
Sep 13 00:17:00.180474 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:17:00.180474 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:17:00.180474 ignition[912]: INFO : mount: mount passed
Sep 13 00:17:00.180474 ignition[912]: INFO : Ignition finished successfully
Sep 13 00:17:00.180894 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 13 00:17:00.183416 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 13 00:17:00.643243 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 13 00:17:00.658690 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 13 00:17:00.665795 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (922)
Sep 13 00:17:00.665824 kernel: BTRFS info (device vda6): first mount of filesystem abbcf5a1-cc71-42ce-94f9-860f3aeda368
Sep 13 00:17:00.666512 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 13 00:17:00.666526 kernel: BTRFS info (device vda6): using free space tree
Sep 13 00:17:00.669522 kernel: BTRFS info (device vda6): auto enabling async discard
Sep 13 00:17:00.669996 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 13 00:17:00.701283 ignition[940]: INFO : Ignition 2.19.0
Sep 13 00:17:00.703530 ignition[940]: INFO : Stage: files
Sep 13 00:17:00.703530 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:17:00.703530 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:17:00.706491 ignition[940]: DEBUG : files: compiled without relabeling support, skipping
Sep 13 00:17:00.708506 ignition[940]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 13 00:17:00.708506 ignition[940]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 13 00:17:00.711585 ignition[940]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 13 00:17:00.711585 ignition[940]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 13 00:17:00.711585 ignition[940]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 13 00:17:00.710930 unknown[940]: wrote ssh authorized keys file for user: core
Sep 13 00:17:00.716257 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:17:00.716257 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Sep 13 00:17:00.716257 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:17:00.716257 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Sep 13 00:17:00.952793 systemd-networkd[761]: eth0: Gained IPv6LL
Sep 13 00:17:01.072829 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 13 00:17:01.331946 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Sep 13 00:17:01.331946 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:17:01.334684 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 13 00:17:01.535639 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Sep 13 00:17:01.705062 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 13 00:17:01.705062 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:17:01.707970 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
Sep 13 00:17:01.920099 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK
Sep 13 00:17:02.343262 ignition[940]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
Sep 13 00:17:02.343262 ignition[940]: INFO : files: op(d): [started] processing unit "containerd.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(d): [finished] processing unit "containerd.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(f): [started] processing unit "prepare-helm.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(f): [finished] processing unit "prepare-helm.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(11): [started] processing unit "coreos-metadata.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service"
Sep 13 00:17:02.346990 ignition[940]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:17:02.373275 ignition[940]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:17:02.377470 ignition[940]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 13 00:17:02.377470 ignition[940]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 13 00:17:02.377470 ignition[940]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service"
Sep 13 00:17:02.377470 ignition[940]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service"
Sep 13 00:17:02.377470 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:17:02.385706 ignition[940]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 13 00:17:02.385706 ignition[940]: INFO : files: files passed
Sep 13 00:17:02.385706 ignition[940]: INFO : Ignition finished successfully
Sep 13 00:17:02.381481 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 13 00:17:02.392685 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 13 00:17:02.396343 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 13 00:17:02.397442 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 13 00:17:02.397555 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 13 00:17:02.403392 initrd-setup-root-after-ignition[967]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 13 00:17:02.406789 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:17:02.406789 initrd-setup-root-after-ignition[969]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:17:02.410130 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 13 00:17:02.410524 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:17:02.412757 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 13 00:17:02.421694 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 13 00:17:02.440988 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 13 00:17:02.442064 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 13 00:17:02.443738 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 13 00:17:02.445478 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 13 00:17:02.447406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 13 00:17:02.448236 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 13 00:17:02.464512 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:17:02.466939 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 13 00:17:02.478408 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:17:02.480031 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:17:02.483024 systemd[1]: Stopped target timers.target - Timer Units.
Sep 13 00:17:02.484793 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 13 00:17:02.484915 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 13 00:17:02.487444 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 13 00:17:02.489611 systemd[1]: Stopped target basic.target - Basic System.
Sep 13 00:17:02.491329 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 13 00:17:02.493085 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 13 00:17:02.495085 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 13 00:17:02.497082 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 13 00:17:02.498952 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 13 00:17:02.500965 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 13 00:17:02.503012 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 13 00:17:02.504767 systemd[1]: Stopped target swap.target - Swaps.
Sep 13 00:17:02.506335 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 13 00:17:02.506462 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 13 00:17:02.508876 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:17:02.510905 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:17:02.512840 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 13 00:17:02.513576 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:17:02.514733 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 13 00:17:02.514854 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 13 00:17:02.517698 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 13 00:17:02.517810 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 13 00:17:02.519461 systemd[1]: Stopped target paths.target - Path Units.
Sep 13 00:17:02.520933 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 13 00:17:02.521016 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:17:02.522726 systemd[1]: Stopped target slices.target - Slice Units.
Sep 13 00:17:02.524321 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 13 00:17:02.525712 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 13 00:17:02.525802 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 13 00:17:02.527382 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 13 00:17:02.527471 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 13 00:17:02.529446 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 13 00:17:02.529591 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 13 00:17:02.531170 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 13 00:17:02.531273 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 13 00:17:02.539708 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 13 00:17:02.541831 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 13 00:17:02.542726 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 13 00:17:02.542912 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:17:02.544487 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 13 00:17:02.544613 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 13 00:17:02.550720 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 13 00:17:02.552580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 13 00:17:02.555170 ignition[994]: INFO : Ignition 2.19.0
Sep 13 00:17:02.555170 ignition[994]: INFO : Stage: umount
Sep 13 00:17:02.557052 ignition[994]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 13 00:17:02.557052 ignition[994]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 13 00:17:02.559255 ignition[994]: INFO : umount: umount passed
Sep 13 00:17:02.559255 ignition[994]: INFO : Ignition finished successfully
Sep 13 00:17:02.558943 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 13 00:17:02.559884 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 13 00:17:02.559982 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 13 00:17:02.561186 systemd[1]: Stopped target network.target - Network.
Sep 13 00:17:02.562473 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 13 00:17:02.562555 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 13 00:17:02.564078 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 13 00:17:02.564129 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 13 00:17:02.566656 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 13 00:17:02.566705 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 13 00:17:02.567670 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 13 00:17:02.567713 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 13 00:17:02.569422 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 13 00:17:02.571224 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 13 00:17:02.577639 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 13 00:17:02.579538 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 13 00:17:02.580547 systemd-networkd[761]: eth0: DHCPv6 lease lost
Sep 13 00:17:02.582663 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 13 00:17:02.582764 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 13 00:17:02.585051 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 13 00:17:02.585119 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:17:02.598611 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 13 00:17:02.599590 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 13 00:17:02.599652 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 13 00:17:02.601907 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:17:02.601954 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:17:02.603075 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 13 00:17:02.603132 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:17:02.605117 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 13 00:17:02.605164 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:17:02.607701 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:17:02.618715 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 13 00:17:02.618836 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 13 00:17:02.625070 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 13 00:17:02.625187 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 13 00:17:02.627094 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 13 00:17:02.627147 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 13 00:17:02.628873 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 13 00:17:02.628993 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:17:02.631122 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 13 00:17:02.631185 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:17:02.632609 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 13 00:17:02.632638 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:17:02.634505 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 13 00:17:02.634549 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 13 00:17:02.637562 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 13 00:17:02.637606 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 13 00:17:02.640368 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 13 00:17:02.640413 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 13 00:17:02.652671 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 13 00:17:02.653714 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 13 00:17:02.653770 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:17:02.655909 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Sep 13 00:17:02.655952 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:17:02.657952 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 13 00:17:02.657993 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:17:02.660187 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 13 00:17:02.660231 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:17:02.662583 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 13 00:17:02.664524 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 13 00:17:02.667041 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 13 00:17:02.669235 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 13 00:17:02.678128 systemd[1]: Switching root.
Sep 13 00:17:02.699090 systemd-journald[236]: Journal stopped
Sep 13 00:17:03.454284 systemd-journald[236]: Received SIGTERM from PID 1 (systemd).
Sep 13 00:17:03.454341 kernel: SELinux: policy capability network_peer_controls=1
Sep 13 00:17:03.454353 kernel: SELinux: policy capability open_perms=1
Sep 13 00:17:03.454363 kernel: SELinux: policy capability extended_socket_class=1
Sep 13 00:17:03.454379 kernel: SELinux: policy capability always_check_network=0
Sep 13 00:17:03.454388 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 13 00:17:03.454398 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 13 00:17:03.454408 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 13 00:17:03.454417 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 13 00:17:03.454427 kernel: audit: type=1403 audit(1757722622.897:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 13 00:17:03.454437 systemd[1]: Successfully loaded SELinux policy in 39.398ms.
Sep 13 00:17:03.454461 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.366ms.
Sep 13 00:17:03.454473 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Sep 13 00:17:03.454485 systemd[1]: Detected virtualization kvm.
Sep 13 00:17:03.454539 systemd[1]: Detected architecture arm64.
Sep 13 00:17:03.454552 systemd[1]: Detected first boot.
Sep 13 00:17:03.454562 systemd[1]: Initializing machine ID from VM UUID.
Sep 13 00:17:03.454572 zram_generator::config[1059]: No configuration found.
Sep 13 00:17:03.454583 systemd[1]: Populated /etc with preset unit settings.
Sep 13 00:17:03.454596 systemd[1]: Queued start job for default target multi-user.target.
Sep 13 00:17:03.454609 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 13 00:17:03.454624 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 13 00:17:03.454634 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 13 00:17:03.454644 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 13 00:17:03.454654 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 13 00:17:03.454665 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 13 00:17:03.454675 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 13 00:17:03.454686 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 13 00:17:03.454696 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 13 00:17:03.454708 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 13 00:17:03.454719 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 13 00:17:03.454729 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 13 00:17:03.454742 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 13 00:17:03.454752 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 13 00:17:03.454763 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 13 00:17:03.454773 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 13 00:17:03.454784 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 13 00:17:03.454794 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 13 00:17:03.454806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 13 00:17:03.454817 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 13 00:17:03.454829 systemd[1]: Reached target slices.target - Slice Units.
Sep 13 00:17:03.454839 systemd[1]: Reached target swap.target - Swaps.
Sep 13 00:17:03.454850 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 13 00:17:03.454861 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 13 00:17:03.454871 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 13 00:17:03.454881 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Sep 13 00:17:03.454892 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 13 00:17:03.454903 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 13 00:17:03.454914 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 13 00:17:03.454925 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 13 00:17:03.454935 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 13 00:17:03.454946 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 13 00:17:03.454956 systemd[1]: Mounting media.mount - External Media Directory...
Sep 13 00:17:03.454966 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 13 00:17:03.454978 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 13 00:17:03.454989 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 13 00:17:03.455002 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 13 00:17:03.455012 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:17:03.455023 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 13 00:17:03.455034 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 13 00:17:03.455044 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:17:03.455071 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:17:03.455082 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:17:03.455093 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 13 00:17:03.455111 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:17:03.455123 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 13 00:17:03.455134 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Sep 13 00:17:03.455145 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Sep 13 00:17:03.455155 kernel: fuse: init (API version 7.39)
Sep 13 00:17:03.455165 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 13 00:17:03.455175 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 13 00:17:03.455191 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 13 00:17:03.455201 kernel: loop: module loaded
Sep 13 00:17:03.455212 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 13 00:17:03.455223 kernel: ACPI: bus type drm_connector registered
Sep 13 00:17:03.455233 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 13 00:17:03.455243 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 13 00:17:03.455253 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 13 00:17:03.455264 systemd[1]: Mounted media.mount - External Media Directory.
Sep 13 00:17:03.455348 systemd-journald[1139]: Collecting audit messages is disabled.
Sep 13 00:17:03.455421 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 13 00:17:03.455436 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 13 00:17:03.455450 systemd-journald[1139]: Journal started
Sep 13 00:17:03.455471 systemd-journald[1139]: Runtime Journal (/run/log/journal/99748fb3e9834c8b91d39f178b6015e6) is 5.9M, max 47.3M, 41.4M free.
Sep 13 00:17:03.456779 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 13 00:17:03.459050 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 13 00:17:03.460216 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 13 00:17:03.461450 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 13 00:17:03.461624 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 13 00:17:03.462708 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:17:03.462849 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:17:03.463912 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:17:03.464053 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:17:03.465264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:17:03.465447 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:17:03.467007 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 13 00:17:03.467170 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 13 00:17:03.468366 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:17:03.468565 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:17:03.469693 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 13 00:17:03.470974 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 13 00:17:03.472292 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 13 00:17:03.473816 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 13 00:17:03.484764 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 13 00:17:03.493618 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 13 00:17:03.495450 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 13 00:17:03.496432 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 13 00:17:03.498676 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 13 00:17:03.504709 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 13 00:17:03.506138 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:17:03.509662 systemd-journald[1139]: Time spent on flushing to /var/log/journal/99748fb3e9834c8b91d39f178b6015e6 is 11.588ms for 846 entries.
Sep 13 00:17:03.509662 systemd-journald[1139]: System Journal (/var/log/journal/99748fb3e9834c8b91d39f178b6015e6) is 8.0M, max 195.6M, 187.6M free.
Sep 13 00:17:03.541953 systemd-journald[1139]: Received client request to flush runtime journal.
Sep 13 00:17:03.509696 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 13 00:17:03.511540 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:17:03.513660 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:17:03.516688 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 13 00:17:03.519243 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 13 00:17:03.520675 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 13 00:17:03.522012 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 13 00:17:03.534253 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 13 00:17:03.536084 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 13 00:17:03.540744 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Sep 13 00:17:03.542030 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:17:03.545637 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 13 00:17:03.555646 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Sep 13 00:17:03.555754 udevadm[1200]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Sep 13 00:17:03.556171 systemd-tmpfiles[1191]: ACLs are not supported, ignoring.
Sep 13 00:17:03.560713 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 13 00:17:03.572790 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 13 00:17:03.594006 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 13 00:17:03.605768 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 13 00:17:03.619052 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Sep 13 00:17:03.619075 systemd-tmpfiles[1213]: ACLs are not supported, ignoring.
Sep 13 00:17:03.622969 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 13 00:17:03.961239 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 13 00:17:03.977722 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 13 00:17:03.996459 systemd-udevd[1219]: Using default interface naming scheme 'v255'.
Sep 13 00:17:04.012829 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 13 00:17:04.020663 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 13 00:17:04.036710 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 13 00:17:04.041322 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0.
Sep 13 00:17:04.073696 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 13 00:17:04.078588 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1225)
Sep 13 00:17:04.131572 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 13 00:17:04.142492 systemd-networkd[1228]: lo: Link UP
Sep 13 00:17:04.142520 systemd-networkd[1228]: lo: Gained carrier
Sep 13 00:17:04.143198 systemd-networkd[1228]: Enumeration completed
Sep 13 00:17:04.143655 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:17:04.143658 systemd-networkd[1228]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 13 00:17:04.144199 systemd-networkd[1228]: eth0: Link UP
Sep 13 00:17:04.144202 systemd-networkd[1228]: eth0: Gained carrier
Sep 13 00:17:04.144214 systemd-networkd[1228]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 13 00:17:04.144755 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 13 00:17:04.146018 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 13 00:17:04.148890 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 13 00:17:04.156581 systemd-networkd[1228]: eth0: DHCPv4 address 10.0.0.134/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 13 00:17:04.157536 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Sep 13 00:17:04.173833 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Sep 13 00:17:04.184628 lvm[1258]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:17:04.183659 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 13 00:17:04.213909 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Sep 13 00:17:04.215164 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 13 00:17:04.227634 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Sep 13 00:17:04.230917 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Sep 13 00:17:04.262961 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Sep 13 00:17:04.264156 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 13 00:17:04.265170 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 13 00:17:04.265201 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 13 00:17:04.265986 systemd[1]: Reached target machines.target - Containers.
Sep 13 00:17:04.267703 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Sep 13 00:17:04.279698 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 13 00:17:04.282293 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 13 00:17:04.285212 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:17:04.286269 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 13 00:17:04.288775 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Sep 13 00:17:04.294820 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 13 00:17:04.301037 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 13 00:17:04.304384 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 13 00:17:04.311526 kernel: loop0: detected capacity change from 0 to 114328
Sep 13 00:17:04.315595 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 13 00:17:04.316237 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Sep 13 00:17:04.326624 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 13 00:17:04.369527 kernel: loop1: detected capacity change from 0 to 114432
Sep 13 00:17:04.432523 kernel: loop2: detected capacity change from 0 to 203944
Sep 13 00:17:04.479591 kernel: loop3: detected capacity change from 0 to 114328
Sep 13 00:17:04.490512 kernel: loop4: detected capacity change from 0 to 114432
Sep 13 00:17:04.495516 kernel: loop5: detected capacity change from 0 to 203944
Sep 13 00:17:04.499826 (sd-merge)[1287]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 13 00:17:04.500216 (sd-merge)[1287]: Merged extensions into '/usr'.
Sep 13 00:17:04.504984 systemd[1]: Reloading requested from client PID 1273 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 13 00:17:04.505419 systemd[1]: Reloading...
Sep 13 00:17:04.558520 zram_generator::config[1314]: No configuration found.
Sep 13 00:17:04.562963 ldconfig[1269]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 13 00:17:04.658723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:17:04.700576 systemd[1]: Reloading finished in 194 ms.
Sep 13 00:17:04.716180 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 13 00:17:04.717419 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 13 00:17:04.735644 systemd[1]: Starting ensure-sysext.service...
Sep 13 00:17:04.737390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 13 00:17:04.740664 systemd[1]: Reloading requested from client PID 1358 ('systemctl') (unit ensure-sysext.service)...
Sep 13 00:17:04.740680 systemd[1]: Reloading...
Sep 13 00:17:04.752923 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 13 00:17:04.753217 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 13 00:17:04.753864 systemd-tmpfiles[1359]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 13 00:17:04.754105 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Sep 13 00:17:04.754155 systemd-tmpfiles[1359]: ACLs are not supported, ignoring.
Sep 13 00:17:04.757330 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:17:04.757344 systemd-tmpfiles[1359]: Skipping /boot
Sep 13 00:17:04.763965 systemd-tmpfiles[1359]: Detected autofs mount point /boot during canonicalization of boot.
Sep 13 00:17:04.763984 systemd-tmpfiles[1359]: Skipping /boot
Sep 13 00:17:04.778531 zram_generator::config[1387]: No configuration found.
Sep 13 00:17:04.870489 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:17:04.913231 systemd[1]: Reloading finished in 172 ms.
Sep 13 00:17:04.929346 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 13 00:17:04.962233 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:17:04.964596 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 13 00:17:04.966672 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 13 00:17:04.969772 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 13 00:17:04.974753 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 13 00:17:04.977707 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:17:04.982701 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:17:04.986097 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:17:04.990408 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:17:04.992418 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:17:04.993971 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 13 00:17:04.995862 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:17:04.996001 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:17:04.997572 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:17:04.997764 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:17:04.999411 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:17:04.999758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:17:05.008292 augenrules[1461]: No rules
Sep 13 00:17:05.013896 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:17:05.018666 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 13 00:17:05.021388 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 13 00:17:05.024712 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:17:05.026041 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:17:05.028238 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:17:05.032753 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:17:05.033826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:17:05.036880 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 13 00:17:05.038018 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:17:05.038869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:17:05.039017 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:17:05.040537 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:17:05.040684 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:17:05.042427 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:17:05.042682 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:17:05.048296 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 13 00:17:05.058700 systemd-resolved[1433]: Positive Trust Anchors:
Sep 13 00:17:05.058721 systemd-resolved[1433]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 13 00:17:05.058754 systemd-resolved[1433]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 13 00:17:05.059715 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 13 00:17:05.063666 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 13 00:17:05.064716 systemd-resolved[1433]: Defaulting to hostname 'linux'.
Sep 13 00:17:05.065582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 13 00:17:05.067594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 13 00:17:05.068677 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 13 00:17:05.068739 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 13 00:17:05.069073 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 13 00:17:05.070653 systemd[1]: Finished ensure-sysext.service.
Sep 13 00:17:05.071847 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 13 00:17:05.073316 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 13 00:17:05.073457 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 13 00:17:05.074898 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 13 00:17:05.075041 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 13 00:17:05.076449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 13 00:17:05.076600 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 13 00:17:05.078322 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 13 00:17:05.078528 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 13 00:17:05.084539 systemd[1]: Reached target network.target - Network.
Sep 13 00:17:05.085436 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 13 00:17:05.086844 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 13 00:17:05.086917 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 13 00:17:05.097696 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 13 00:17:05.137475 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 13 00:17:05.138260 systemd-timesyncd[1501]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 13 00:17:05.138311 systemd-timesyncd[1501]: Initial clock synchronization to Sat 2025-09-13 00:17:05.119476 UTC.
Sep 13 00:17:05.139574 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 13 00:17:05.140763 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 13 00:17:05.142119 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 13 00:17:05.143583 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 13 00:17:05.144927 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 13 00:17:05.145042 systemd[1]: Reached target paths.target - Path Units.
Sep 13 00:17:05.146048 systemd[1]: Reached target time-set.target - System Time Set.
Sep 13 00:17:05.147356 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 13 00:17:05.148654 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 13 00:17:05.149948 systemd[1]: Reached target timers.target - Timer Units.
Sep 13 00:17:05.151707 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 13 00:17:05.154198 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 13 00:17:05.156349 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 13 00:17:05.158405 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 13 00:17:05.159558 systemd[1]: Reached target sockets.target - Socket Units.
Sep 13 00:17:05.160487 systemd[1]: Reached target basic.target - Basic System.
Sep 13 00:17:05.161574 systemd[1]: System is tainted: cgroupsv1
Sep 13 00:17:05.161619 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:17:05.161639 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 13 00:17:05.162753 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 13 00:17:05.164762 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 13 00:17:05.166672 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 13 00:17:05.169655 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 13 00:17:05.171693 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 13 00:17:05.174328 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 13 00:17:05.174853 jq[1507]: false
Sep 13 00:17:05.177546 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 13 00:17:05.182928 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 13 00:17:05.185679 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 13 00:17:05.193264 dbus-daemon[1506]: [system] SELinux support is enabled
Sep 13 00:17:05.195474 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 13 00:17:05.196795 extend-filesystems[1509]: Found loop3
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found loop4
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found loop5
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda1
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda2
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda3
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found usr
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda4
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda6
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda7
Sep 13 00:17:05.198245 extend-filesystems[1509]: Found vda9
Sep 13 00:17:05.198245 extend-filesystems[1509]: Checking size of /dev/vda9
Sep 13 00:17:05.198302 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 13 00:17:05.201397 systemd[1]: Starting update-engine.service - Update Engine...
Sep 13 00:17:05.206325 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 13 00:17:05.207941 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 13 00:17:05.218049 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 13 00:17:05.218309 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 13 00:17:05.218582 systemd[1]: motdgen.service: Deactivated successfully.
Sep 13 00:17:05.218789 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 13 00:17:05.222827 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 13 00:17:05.223056 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 13 00:17:05.234790 jq[1530]: true
Sep 13 00:17:05.236977 extend-filesystems[1509]: Resized partition /dev/vda9
Sep 13 00:17:05.241519 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1227)
Sep 13 00:17:05.241637 update_engine[1526]: I20250913 00:17:05.241240 1526 main.cc:92] Flatcar Update Engine starting
Sep 13 00:17:05.244391 update_engine[1526]: I20250913 00:17:05.243988 1526 update_check_scheduler.cc:74] Next update check in 8m34s
Sep 13 00:17:05.246509 extend-filesystems[1547]: resize2fs 1.47.1 (20-May-2024)
Sep 13 00:17:05.250586 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 13 00:17:05.248016 (ntainerd)[1537]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 13 00:17:05.253161 systemd[1]: Started update-engine.service - Update Engine.
Sep 13 00:17:05.253295 jq[1543]: true
Sep 13 00:17:05.262093 tar[1535]: linux-arm64/helm
Sep 13 00:17:05.265459 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 13 00:17:05.265971 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 13 00:17:05.267632 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 13 00:17:05.267660 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 13 00:17:05.269311 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 13 00:17:05.270953 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 13 00:17:05.274530 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 13 00:17:05.292867 systemd-logind[1524]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 13 00:17:05.294058 systemd-logind[1524]: New seat seat0.
Sep 13 00:17:05.294414 extend-filesystems[1547]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 13 00:17:05.294414 extend-filesystems[1547]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 13 00:17:05.294414 extend-filesystems[1547]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 13 00:17:05.312671 extend-filesystems[1509]: Resized filesystem in /dev/vda9
Sep 13 00:17:05.313389 bash[1567]: Updated "/home/core/.ssh/authorized_keys"
Sep 13 00:17:05.296017 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 13 00:17:05.296279 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 13 00:17:05.311181 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 13 00:17:05.314786 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 13 00:17:05.317651 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 13 00:17:05.322196 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 13 00:17:05.411517 containerd[1537]: time="2025-09-13T00:17:05.411041920Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Sep 13 00:17:05.439831 containerd[1537]: time="2025-09-13T00:17:05.439773480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441142 containerd[1537]: time="2025-09-13T00:17:05.441106880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.106-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441142 containerd[1537]: time="2025-09-13T00:17:05.441138960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Sep 13 00:17:05.441206 containerd[1537]: time="2025-09-13T00:17:05.441155280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Sep 13 00:17:05.441324 containerd[1537]: time="2025-09-13T00:17:05.441305320Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Sep 13 00:17:05.441370 containerd[1537]: time="2025-09-13T00:17:05.441327840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441402 containerd[1537]: time="2025-09-13T00:17:05.441385240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441426 containerd[1537]: time="2025-09-13T00:17:05.441400920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441657 containerd[1537]: time="2025-09-13T00:17:05.441634240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441657 containerd[1537]: time="2025-09-13T00:17:05.441655560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441716 containerd[1537]: time="2025-09-13T00:17:05.441669200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441716 containerd[1537]: time="2025-09-13T00:17:05.441679200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441758 containerd[1537]: time="2025-09-13T00:17:05.441748480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:17:05.441962 containerd[1537]: time="2025-09-13T00:17:05.441921280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Sep 13 00:17:05.442066 containerd[1537]: time="2025-09-13T00:17:05.442044640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Sep 13 00:17:05.442066 containerd[1537]: time="2025-09-13T00:17:05.442063520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Sep 13 00:17:05.442183 containerd[1537]: time="2025-09-13T00:17:05.442144960Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Sep 13 00:17:05.442211 containerd[1537]: time="2025-09-13T00:17:05.442192920Z" level=info msg="metadata content store policy set" policy=shared
Sep 13 00:17:05.466938 containerd[1537]: time="2025-09-13T00:17:05.466898840Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Sep 13 00:17:05.467017 containerd[1537]: time="2025-09-13T00:17:05.466962680Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Sep 13 00:17:05.467017 containerd[1537]: time="2025-09-13T00:17:05.466980520Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Sep 13 00:17:05.467017 containerd[1537]: time="2025-09-13T00:17:05.466996000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Sep 13 00:17:05.467017 containerd[1537]: time="2025-09-13T00:17:05.467010800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Sep 13 00:17:05.467300 containerd[1537]: time="2025-09-13T00:17:05.467166080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Sep 13 00:17:05.468090 containerd[1537]: time="2025-09-13T00:17:05.468055640Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Sep 13 00:17:05.468254 containerd[1537]: time="2025-09-13T00:17:05.468232080Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Sep 13 00:17:05.468294 containerd[1537]: time="2025-09-13T00:17:05.468255640Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Sep 13 00:17:05.468294 containerd[1537]: time="2025-09-13T00:17:05.468280000Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Sep 13 00:17:05.468330 containerd[1537]: time="2025-09-13T00:17:05.468295080Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468330 containerd[1537]: time="2025-09-13T00:17:05.468308360Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468330 containerd[1537]: time="2025-09-13T00:17:05.468321600Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468379 containerd[1537]: time="2025-09-13T00:17:05.468336360Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468379 containerd[1537]: time="2025-09-13T00:17:05.468358880Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468379 containerd[1537]: time="2025-09-13T00:17:05.468372080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468433 containerd[1537]: time="2025-09-13T00:17:05.468385560Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468433 containerd[1537]: time="2025-09-13T00:17:05.468397800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Sep 13 00:17:05.468433 containerd[1537]: time="2025-09-13T00:17:05.468418680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468480 containerd[1537]: time="2025-09-13T00:17:05.468440960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468480 containerd[1537]: time="2025-09-13T00:17:05.468453520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468480 containerd[1537]: time="2025-09-13T00:17:05.468466320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468480 containerd[1537]: time="2025-09-13T00:17:05.468478320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468582 containerd[1537]: time="2025-09-13T00:17:05.468490880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468582 containerd[1537]: time="2025-09-13T00:17:05.468513920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468582 containerd[1537]: time="2025-09-13T00:17:05.468526960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468582 containerd[1537]: time="2025-09-13T00:17:05.468539880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468582 containerd[1537]: time="2025-09-13T00:17:05.468553800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468582 containerd[1537]: time="2025-09-13T00:17:05.468564720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468582 containerd[1537]: time="2025-09-13T00:17:05.468582360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468702 containerd[1537]: time="2025-09-13T00:17:05.468596120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468702 containerd[1537]: time="2025-09-13T00:17:05.468611800Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Sep 13 00:17:05.468702 containerd[1537]: time="2025-09-13T00:17:05.468631520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468702 containerd[1537]: time="2025-09-13T00:17:05.468642640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468702 containerd[1537]: time="2025-09-13T00:17:05.468660280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Sep 13 00:17:05.468785 containerd[1537]: time="2025-09-13T00:17:05.468776880Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Sep 13 00:17:05.468803 containerd[1537]: time="2025-09-13T00:17:05.468794800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Sep 13 00:17:05.468825 containerd[1537]: time="2025-09-13T00:17:05.468806160Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Sep 13 00:17:05.468844 containerd[1537]: time="2025-09-13T00:17:05.468828640Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Sep 13 00:17:05.468844 containerd[1537]: time="2025-09-13T00:17:05.468839480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.468878 containerd[1537]: time="2025-09-13T00:17:05.468856600Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Sep 13 00:17:05.468878 containerd[1537]: time="2025-09-13T00:17:05.468867120Z" level=info msg="NRI interface is disabled by configuration."
Sep 13 00:17:05.468911 containerd[1537]: time="2025-09-13T00:17:05.468877840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Sep 13 00:17:05.470678 containerd[1537]: time="2025-09-13T00:17:05.470169640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Sep 13 00:17:05.470678 containerd[1537]: time="2025-09-13T00:17:05.470252440Z" level=info msg="Connect containerd service"
Sep 13 00:17:05.470678 containerd[1537]: time="2025-09-13T00:17:05.470294240Z" level=info msg="using legacy CRI server"
Sep 13 00:17:05.470678 containerd[1537]: time="2025-09-13T00:17:05.470302840Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 13 00:17:05.470678 containerd[1537]: time="2025-09-13T00:17:05.470404840Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Sep 13 00:17:05.471307 containerd[1537]: time="2025-09-13T00:17:05.471274720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:17:05.471486 containerd[1537]: time="2025-09-13T00:17:05.471457200Z" level=info msg="Start subscribing containerd event"
Sep 13 00:17:05.473693 containerd[1537]: time="2025-09-13T00:17:05.473649520Z" level=info msg="Start recovering state"
Sep 13 00:17:05.473742 containerd[1537]: time="2025-09-13T00:17:05.473731640Z" level=info msg="Start event monitor"
Sep 13 00:17:05.473761 containerd[1537]: time="2025-09-13T00:17:05.473744640Z" level=info msg="Start snapshots syncer"
Sep 13 00:17:05.473761 containerd[1537]: time="2025-09-13T00:17:05.473754720Z" level=info msg="Start cni network conf syncer for default"
Sep 13 00:17:05.473795 containerd[1537]: time="2025-09-13T00:17:05.473762200Z" level=info msg="Start streaming server"
Sep 13 00:17:05.475536 containerd[1537]: time="2025-09-13T00:17:05.474155360Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 13 00:17:05.475536 containerd[1537]: time="2025-09-13T00:17:05.474212200Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 13 00:17:05.474377 systemd[1]: Started containerd.service - containerd container runtime.
Sep 13 00:17:05.475774 containerd[1537]: time="2025-09-13T00:17:05.475747760Z" level=info msg="containerd successfully booted in 0.066259s"
Sep 13 00:17:05.625822 tar[1535]: linux-arm64/LICENSE
Sep 13 00:17:05.625822 tar[1535]: linux-arm64/README.md
Sep 13 00:17:05.643231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 13 00:17:05.654660 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 13 00:17:05.673271 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 13 00:17:05.687892 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 13 00:17:05.693114 systemd[1]: issuegen.service: Deactivated successfully.
Sep 13 00:17:05.693354 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 13 00:17:05.696099 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 13 00:17:05.707198 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 13 00:17:05.709924 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 13 00:17:05.711966 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 13 00:17:05.713512 systemd[1]: Reached target getty.target - Login Prompts.
Sep 13 00:17:05.944660 systemd-networkd[1228]: eth0: Gained IPv6LL
Sep 13 00:17:05.947224 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 13 00:17:05.949068 systemd[1]: Reached target network-online.target - Network is Online.
Sep 13 00:17:05.956717 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 13 00:17:05.959262 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:17:05.961475 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 13 00:17:05.978528 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 13 00:17:05.980192 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 13 00:17:05.980416 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 13 00:17:05.982207 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 13 00:17:06.539899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:17:06.541538 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 13 00:17:06.543402 systemd[1]: Startup finished in 5.712s (kernel) + 3.685s (userspace) = 9.397s.
Sep 13 00:17:06.543674 (kubelet)[1641]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:17:06.940544 kubelet[1641]: E0913 00:17:06.940375 1641 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:17:06.942620 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:17:06.942819 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:17:10.711750 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 13 00:17:10.725757 systemd[1]: Started sshd@0-10.0.0.134:22-10.0.0.1:54848.service - OpenSSH per-connection server daemon (10.0.0.1:54848).
Sep 13 00:17:10.764530 sshd[1654]: Accepted publickey for core from 10.0.0.1 port 54848 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:10.765101 sshd[1654]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:10.771704 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 13 00:17:10.784691 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 13 00:17:10.786293 systemd-logind[1524]: New session 1 of user core.
Sep 13 00:17:10.793126 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 13 00:17:10.795228 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 13 00:17:10.801252 (systemd)[1660]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 13 00:17:10.870868 systemd[1660]: Queued start job for default target default.target.
Sep 13 00:17:10.871228 systemd[1660]: Created slice app.slice - User Application Slice.
Sep 13 00:17:10.871250 systemd[1660]: Reached target paths.target - Paths.
Sep 13 00:17:10.871261 systemd[1660]: Reached target timers.target - Timers.
Sep 13 00:17:10.881589 systemd[1660]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 13 00:17:10.886791 systemd[1660]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 13 00:17:10.886849 systemd[1660]: Reached target sockets.target - Sockets.
Sep 13 00:17:10.886860 systemd[1660]: Reached target basic.target - Basic System.
Sep 13 00:17:10.886894 systemd[1660]: Reached target default.target - Main User Target.
Sep 13 00:17:10.886916 systemd[1660]: Startup finished in 81ms.
Sep 13 00:17:10.887161 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 13 00:17:10.888374 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 13 00:17:10.946806 systemd[1]: Started sshd@1-10.0.0.134:22-10.0.0.1:54860.service - OpenSSH per-connection server daemon (10.0.0.1:54860).
Sep 13 00:17:10.983313 sshd[1672]: Accepted publickey for core from 10.0.0.1 port 54860 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:10.984456 sshd[1672]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:10.988555 systemd-logind[1524]: New session 2 of user core.
Sep 13 00:17:11.004778 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 13 00:17:11.056536 sshd[1672]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:11.071756 systemd[1]: Started sshd@2-10.0.0.134:22-10.0.0.1:54866.service - OpenSSH per-connection server daemon (10.0.0.1:54866).
Sep 13 00:17:11.072245 systemd[1]: sshd@1-10.0.0.134:22-10.0.0.1:54860.service: Deactivated successfully.
Sep 13 00:17:11.073818 systemd-logind[1524]: Session 2 logged out. Waiting for processes to exit.
Sep 13 00:17:11.074471 systemd[1]: session-2.scope: Deactivated successfully.
Sep 13 00:17:11.075650 systemd-logind[1524]: Removed session 2.
Sep 13 00:17:11.100207 sshd[1677]: Accepted publickey for core from 10.0.0.1 port 54866 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:11.101373 sshd[1677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:11.105572 systemd-logind[1524]: New session 3 of user core.
Sep 13 00:17:11.115717 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 13 00:17:11.165789 sshd[1677]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:11.177813 systemd[1]: Started sshd@3-10.0.0.134:22-10.0.0.1:54870.service - OpenSSH per-connection server daemon (10.0.0.1:54870).
Sep 13 00:17:11.178484 systemd[1]: sshd@2-10.0.0.134:22-10.0.0.1:54866.service: Deactivated successfully.
Sep 13 00:17:11.180032 systemd[1]: session-3.scope: Deactivated successfully.
Sep 13 00:17:11.180696 systemd-logind[1524]: Session 3 logged out. Waiting for processes to exit.
Sep 13 00:17:11.182655 systemd-logind[1524]: Removed session 3.
Sep 13 00:17:11.207104 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 54870 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:11.208389 sshd[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:11.213769 systemd-logind[1524]: New session 4 of user core.
Sep 13 00:17:11.219781 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 13 00:17:11.275082 sshd[1685]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:11.285740 systemd[1]: Started sshd@4-10.0.0.134:22-10.0.0.1:54872.service - OpenSSH per-connection server daemon (10.0.0.1:54872).
Sep 13 00:17:11.286098 systemd[1]: sshd@3-10.0.0.134:22-10.0.0.1:54870.service: Deactivated successfully.
Sep 13 00:17:11.288459 systemd[1]: session-4.scope: Deactivated successfully.
Sep 13 00:17:11.288553 systemd-logind[1524]: Session 4 logged out. Waiting for processes to exit.
Sep 13 00:17:11.289591 systemd-logind[1524]: Removed session 4.
Sep 13 00:17:11.317532 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 54872 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:11.318942 sshd[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:11.322717 systemd-logind[1524]: New session 5 of user core.
Sep 13 00:17:11.333758 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 13 00:17:11.390929 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 13 00:17:11.391207 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:17:11.405513 sudo[1700]: pam_unix(sudo:session): session closed for user root
Sep 13 00:17:11.407342 sshd[1693]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:11.415791 systemd[1]: Started sshd@5-10.0.0.134:22-10.0.0.1:54888.service - OpenSSH per-connection server daemon (10.0.0.1:54888).
Sep 13 00:17:11.416155 systemd[1]: sshd@4-10.0.0.134:22-10.0.0.1:54872.service: Deactivated successfully.
Sep 13 00:17:11.418544 systemd[1]: session-5.scope: Deactivated successfully.
Sep 13 00:17:11.418823 systemd-logind[1524]: Session 5 logged out. Waiting for processes to exit.
Sep 13 00:17:11.419994 systemd-logind[1524]: Removed session 5.
Sep 13 00:17:11.443965 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 54888 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:11.445166 sshd[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:11.449074 systemd-logind[1524]: New session 6 of user core.
Sep 13 00:17:11.463754 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 13 00:17:11.516102 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 13 00:17:11.516376 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:17:11.519310 sudo[1710]: pam_unix(sudo:session): session closed for user root
Sep 13 00:17:11.524002 sudo[1709]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Sep 13 00:17:11.524258 sudo[1709]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:17:11.538732 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Sep 13 00:17:11.539979 auditctl[1713]: No rules
Sep 13 00:17:11.540784 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 13 00:17:11.541014 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Sep 13 00:17:11.542624 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Sep 13 00:17:11.564645 augenrules[1732]: No rules
Sep 13 00:17:11.565921 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Sep 13 00:17:11.567458 sudo[1709]: pam_unix(sudo:session): session closed for user root
Sep 13 00:17:11.569204 sshd[1702]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:11.577724 systemd[1]: Started sshd@6-10.0.0.134:22-10.0.0.1:54900.service - OpenSSH per-connection server daemon (10.0.0.1:54900).
Sep 13 00:17:11.578061 systemd[1]: sshd@5-10.0.0.134:22-10.0.0.1:54888.service: Deactivated successfully.
Sep 13 00:17:11.580344 systemd[1]: session-6.scope: Deactivated successfully.
Sep 13 00:17:11.580515 systemd-logind[1524]: Session 6 logged out. Waiting for processes to exit.
Sep 13 00:17:11.581536 systemd-logind[1524]: Removed session 6.
Sep 13 00:17:11.607192 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 54900 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:17:11.608280 sshd[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:17:11.611563 systemd-logind[1524]: New session 7 of user core.
Sep 13 00:17:11.620763 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 13 00:17:11.671118 sudo[1745]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 13 00:17:11.671412 sudo[1745]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 13 00:17:11.924885 (dockerd)[1764]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 13 00:17:11.925160 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 13 00:17:12.131177 dockerd[1764]: time="2025-09-13T00:17:12.131116740Z" level=info msg="Starting up"
Sep 13 00:17:12.416209 dockerd[1764]: time="2025-09-13T00:17:12.416085210Z" level=info msg="Loading containers: start."
Sep 13 00:17:12.496518 kernel: Initializing XFRM netlink socket
Sep 13 00:17:12.558049 systemd-networkd[1228]: docker0: Link UP
Sep 13 00:17:12.579113 dockerd[1764]: time="2025-09-13T00:17:12.578917323Z" level=info msg="Loading containers: done."
Sep 13 00:17:12.590981 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck799893407-merged.mount: Deactivated successfully.
Sep 13 00:17:12.594327 dockerd[1764]: time="2025-09-13T00:17:12.594271609Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 13 00:17:12.594422 dockerd[1764]: time="2025-09-13T00:17:12.594401226Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Sep 13 00:17:12.594573 dockerd[1764]: time="2025-09-13T00:17:12.594554584Z" level=info msg="Daemon has completed initialization"
Sep 13 00:17:12.628756 dockerd[1764]: time="2025-09-13T00:17:12.628576203Z" level=info msg="API listen on /run/docker.sock"
Sep 13 00:17:12.628853 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 13 00:17:13.195722 containerd[1537]: time="2025-09-13T00:17:13.195687368Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\""
Sep 13 00:17:13.811490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1626493088.mount: Deactivated successfully.
Sep 13 00:17:14.730592 containerd[1537]: time="2025-09-13T00:17:14.730546098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:14.731810 containerd[1537]: time="2025-09-13T00:17:14.731780676Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.13: active requests=0, bytes read=25687327"
Sep 13 00:17:14.733066 containerd[1537]: time="2025-09-13T00:17:14.732448570Z" level=info msg="ImageCreate event name:\"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:14.737104 containerd[1537]: time="2025-09-13T00:17:14.737064908Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:14.738211 containerd[1537]: time="2025-09-13T00:17:14.738174373Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.13\" with image id \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9abeb8a2d3e53e356d1f2e5d5dc2081cf28f23242651b0552c9e38f4a7ae960e\", size \"25683924\" in 1.542442638s"
Sep 13 00:17:14.738257 containerd[1537]: time="2025-09-13T00:17:14.738214185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.13\" returns image reference \"sha256:0b1c07d8fd4a3526d5c44502e682df3627a3b01c1e608e5e24c3519c8fb337b6\""
Sep 13 00:17:14.739685 containerd[1537]: time="2025-09-13T00:17:14.739662934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\""
Sep 13 00:17:15.887484 containerd[1537]: time="2025-09-13T00:17:15.886974463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:15.887484 containerd[1537]: time="2025-09-13T00:17:15.887437520Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.13: active requests=0, bytes read=22459769"
Sep 13 00:17:15.888392 containerd[1537]: time="2025-09-13T00:17:15.888364873Z" level=info msg="ImageCreate event name:\"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:15.892533 containerd[1537]: time="2025-09-13T00:17:15.891295076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:15.892621 containerd[1537]: time="2025-09-13T00:17:15.892549735Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.13\" with image id \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:facc91288697a288a691520949fe4eec40059ef065c89da8e10481d14e131b09\", size \"24028542\" in 1.152854743s"
Sep 13 00:17:15.892621 containerd[1537]: time="2025-09-13T00:17:15.892576118Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.13\" returns image reference \"sha256:c359cb88f3d2147f2cb4c5ada4fbdeadc4b1c009d66c8f33f3856efaf04ee6ef\""
Sep 13 00:17:15.893212 containerd[1537]: time="2025-09-13T00:17:15.893189676Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\""
Sep 13 00:17:16.855515 containerd[1537]: time="2025-09-13T00:17:16.855450513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:16.856939 containerd[1537]: time="2025-09-13T00:17:16.856904941Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.13: active requests=0, bytes read=17127508"
Sep 13 00:17:16.857909 containerd[1537]: time="2025-09-13T00:17:16.857861594Z" level=info msg="ImageCreate event name:\"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:16.860865 containerd[1537]: time="2025-09-13T00:17:16.860828414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:16.863178 containerd[1537]: time="2025-09-13T00:17:16.863133840Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.13\" with image id \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c5ce150dcce2419fdef9f9875fef43014355ccebf937846ed3a2971953f9b241\", size \"18696299\" in 969.910625ms"
Sep 13 00:17:16.863273 containerd[1537]: time="2025-09-13T00:17:16.863183090Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.13\" returns image reference \"sha256:5e3cbe2ba7db787c6aebfcf4484156dd4ebd7ede811ef72e8929593e59a5fa27\""
Sep 13 00:17:16.863645 containerd[1537]: time="2025-09-13T00:17:16.863597396Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\""
Sep 13 00:17:17.193125 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 13 00:17:17.204737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:17:17.306255 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:17:17.309989 (kubelet)[1990]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 13 00:17:17.352400 kubelet[1990]: E0913 00:17:17.352348 1990 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 13 00:17:17.355426 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 13 00:17:17.355713 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 13 00:17:17.868863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2081179770.mount: Deactivated successfully.
Sep 13 00:17:18.240246 containerd[1537]: time="2025-09-13T00:17:18.240107198Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:18.240837 containerd[1537]: time="2025-09-13T00:17:18.240788591Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.13: active requests=0, bytes read=26954909"
Sep 13 00:17:18.241632 containerd[1537]: time="2025-09-13T00:17:18.241590119Z" level=info msg="ImageCreate event name:\"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:18.243580 containerd[1537]: time="2025-09-13T00:17:18.243549742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:18.244301 containerd[1537]: time="2025-09-13T00:17:18.244262998Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.13\" with image id \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\", repo tag \"registry.k8s.io/kube-proxy:v1.31.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:a39637326e88d128d38da6ff2b2ceb4e856475887bfcb5f7a55734d4f63d9fae\", size \"26953926\" in 1.380633301s"
Sep 13 00:17:18.244332 containerd[1537]: time="2025-09-13T00:17:18.244300657Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.13\" returns image reference \"sha256:c15699f0b7002450249485b10f20211982dfd2bec4d61c86c35acebc659e794e\""
Sep 13 00:17:18.244734 containerd[1537]: time="2025-09-13T00:17:18.244707718Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Sep 13 00:17:18.734907 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3905405768.mount: Deactivated successfully.
Sep 13 00:17:19.490576 containerd[1537]: time="2025-09-13T00:17:19.490519748Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:19.491204 containerd[1537]: time="2025-09-13T00:17:19.491162143Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Sep 13 00:17:19.492174 containerd[1537]: time="2025-09-13T00:17:19.492118100Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:19.495309 containerd[1537]: time="2025-09-13T00:17:19.495252236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:19.496637 containerd[1537]: time="2025-09-13T00:17:19.496600075Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.251858335s"
Sep 13 00:17:19.496701 containerd[1537]: time="2025-09-13T00:17:19.496637896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Sep 13 00:17:19.497073 containerd[1537]: time="2025-09-13T00:17:19.497048129Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 13 00:17:19.934694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463045017.mount: Deactivated successfully.
Sep 13 00:17:19.941128 containerd[1537]: time="2025-09-13T00:17:19.941079323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:19.941546 containerd[1537]: time="2025-09-13T00:17:19.941514583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 13 00:17:19.942361 containerd[1537]: time="2025-09-13T00:17:19.942324374Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:19.944943 containerd[1537]: time="2025-09-13T00:17:19.944904830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:19.945748 containerd[1537]: time="2025-09-13T00:17:19.945707424Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 448.625393ms"
Sep 13 00:17:19.945788 containerd[1537]: time="2025-09-13T00:17:19.945745885Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 13 00:17:19.946152 containerd[1537]: time="2025-09-13T00:17:19.946121215Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Sep 13 00:17:20.433362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount611655682.mount: Deactivated successfully.
Sep 13 00:17:21.984119 containerd[1537]: time="2025-09-13T00:17:21.982961659Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:21.984119 containerd[1537]: time="2025-09-13T00:17:21.983637319Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66537163"
Sep 13 00:17:21.984733 containerd[1537]: time="2025-09-13T00:17:21.984705405Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:21.988049 containerd[1537]: time="2025-09-13T00:17:21.988005339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:21.989533 containerd[1537]: time="2025-09-13T00:17:21.989381328Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.043231966s"
Sep 13 00:17:21.989533 containerd[1537]: time="2025-09-13T00:17:21.989416672Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
Sep 13 00:17:26.740352 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:17:26.754816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:17:26.780830 systemd[1]: Reloading requested from client PID 2150 ('systemctl') (unit session-7.scope)...
Sep 13 00:17:26.780847 systemd[1]: Reloading...
Sep 13 00:17:26.842999 zram_generator::config[2189]: No configuration found.
Sep 13 00:17:26.968709 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 13 00:17:27.021396 systemd[1]: Reloading finished in 240 ms.
Sep 13 00:17:27.053662 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 13 00:17:27.053723 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 13 00:17:27.053964 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:17:27.055593 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 13 00:17:27.148579 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 13 00:17:27.152599 (kubelet)[2246]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 13 00:17:27.184133 kubelet[2246]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:17:27.184133 kubelet[2246]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Sep 13 00:17:27.184133 kubelet[2246]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 13 00:17:27.184460 kubelet[2246]: I0913 00:17:27.184185 2246 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 13 00:17:28.656673 kubelet[2246]: I0913 00:17:28.656624 2246 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Sep 13 00:17:28.656673 kubelet[2246]: I0913 00:17:28.656657 2246 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 13 00:17:28.657057 kubelet[2246]: I0913 00:17:28.656922 2246 server.go:934] "Client rotation is on, will bootstrap in background"
Sep 13 00:17:28.682095 kubelet[2246]: E0913 00:17:28.682037 2246 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.134:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:17:28.682778 kubelet[2246]: I0913 00:17:28.682744 2246 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 13 00:17:28.688783 kubelet[2246]: E0913 00:17:28.688747 2246 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Sep 13 00:17:28.688783 kubelet[2246]: I0913 00:17:28.688783 2246 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Sep 13 00:17:28.692147 kubelet[2246]: I0913 00:17:28.692118 2246 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 13 00:17:28.693185 kubelet[2246]: I0913 00:17:28.693156 2246 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Sep 13 00:17:28.693338 kubelet[2246]: I0913 00:17:28.693299 2246 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 13 00:17:28.693505 kubelet[2246]: I0913 00:17:28.693330 2246 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:17:28.693599 kubelet[2246]: I0913 00:17:28.693585 2246 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:17:28.693599 kubelet[2246]: I0913 00:17:28.693598 2246 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:17:28.693855 kubelet[2246]: I0913 00:17:28.693833 2246 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:17:28.696358 kubelet[2246]: I0913 00:17:28.695720 2246 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:17:28.696358 kubelet[2246]: I0913 00:17:28.695754 2246 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:17:28.696358 kubelet[2246]: I0913 00:17:28.695774 2246 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:17:28.696358 kubelet[2246]: I0913 00:17:28.695851 2246 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:17:28.699837 kubelet[2246]: I0913 00:17:28.699816 2246 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:17:28.699934 kubelet[2246]: W0913 00:17:28.699776 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Sep 13 00:17:28.699985 kubelet[2246]: E0913 00:17:28.699948 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:17:28.699985 kubelet[2246]: W0913 00:17:28.699804 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Sep 13 00:17:28.699985 kubelet[2246]: E0913 00:17:28.699974 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.134:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:17:28.701598 kubelet[2246]: I0913 00:17:28.701566 2246 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:17:28.701757 kubelet[2246]: W0913 00:17:28.701739 2246 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 13 00:17:28.702753 kubelet[2246]: I0913 00:17:28.702726 2246 server.go:1274] "Started kubelet"
Sep 13 00:17:28.702965 kubelet[2246]: I0913 00:17:28.702925 2246 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:17:28.706321 kubelet[2246]: I0913 00:17:28.706297 2246 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:17:28.706384 kubelet[2246]: I0913 00:17:28.706363 2246 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:17:28.706509 kubelet[2246]: I0913 00:17:28.706445 2246 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:17:28.706717 kubelet[2246]: I0913 00:17:28.706697 2246 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:17:28.706983 kubelet[2246]: I0913 00:17:28.706951 2246 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:17:28.707242 kubelet[2246]: I0913 00:17:28.707206 2246 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:17:28.707313 kubelet[2246]: I0913 00:17:28.707298 2246 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:17:28.707352 kubelet[2246]: I0913 00:17:28.707345 2246 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:17:28.708431 kubelet[2246]: W0913 00:17:28.708385 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Sep 13 00:17:28.708526 kubelet[2246]: E0913 00:17:28.708442 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:17:28.708766 kubelet[2246]: I0913 00:17:28.708737 2246 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:17:28.708905 kubelet[2246]: I0913 00:17:28.708883 2246 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:17:28.709777 kubelet[2246]: E0913 00:17:28.707396 2246 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.134:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.134:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1864af6fa9955b6d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-13 00:17:28.702704493 +0000 UTC m=+1.547212838,LastTimestamp:2025-09-13 00:17:28.702704493 +0000 UTC m=+1.547212838,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 13 00:17:28.710063 kubelet[2246]: E0913 00:17:28.710018 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:17:28.710163 kubelet[2246]: E0913 00:17:28.710038 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="200ms"
Sep 13 00:17:28.712527 kubelet[2246]: E0913 00:17:28.710435 2246 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:17:28.712527 kubelet[2246]: I0913 00:17:28.710581 2246 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:17:28.718517 kubelet[2246]: I0913 00:17:28.718455 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:17:28.719360 kubelet[2246]: I0913 00:17:28.719331 2246 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:17:28.719360 kubelet[2246]: I0913 00:17:28.719350 2246 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:17:28.719435 kubelet[2246]: I0913 00:17:28.719366 2246 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:17:28.719435 kubelet[2246]: E0913 00:17:28.719406 2246 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:17:28.724857 kubelet[2246]: W0913 00:17:28.724814 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused
Sep 13 00:17:28.724928 kubelet[2246]: E0913 00:17:28.724865 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError"
Sep 13 00:17:28.728378 kubelet[2246]: I0913 00:17:28.728350 2246 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:17:28.728378 kubelet[2246]: I0913 00:17:28.728369 2246 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:17:28.728378 kubelet[2246]: I0913 00:17:28.728388 2246 state_mem.go:36] "Initialized new in-memory state store" Sep 13 00:17:28.799535 kubelet[2246]: I0913 00:17:28.799477 2246 policy_none.go:49] "None policy: Start" Sep 13 00:17:28.800235 kubelet[2246]: I0913 00:17:28.800214 2246 memory_manager.go:170] "Starting memorymanager" policy="None" Sep 13 00:17:28.800293 kubelet[2246]: I0913 00:17:28.800244 2246 state_mem.go:35] "Initializing new in-memory state store" Sep 13 00:17:28.804911 kubelet[2246]: I0913 00:17:28.804881 2246 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Sep 13 00:17:28.805465 kubelet[2246]: I0913 00:17:28.805064 2246 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 13 00:17:28.805465 kubelet[2246]: I0913 00:17:28.805081 2246 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 13 00:17:28.805465 kubelet[2246]: I0913 00:17:28.805409 2246 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 13 00:17:28.806657 kubelet[2246]: E0913 00:17:28.806636 2246 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 13 00:17:28.907114 kubelet[2246]: I0913 00:17:28.907006 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:17:28.907554 kubelet[2246]: E0913 00:17:28.907526 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 13 00:17:28.908815 kubelet[2246]: I0913 00:17:28.908788 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: 
\"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:28.908857 kubelet[2246]: I0913 00:17:28.908826 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:28.908857 kubelet[2246]: I0913 00:17:28.908846 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:28.908914 kubelet[2246]: I0913 00:17:28.908864 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc3a0990fb32f8ff1551d479482c2b17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc3a0990fb32f8ff1551d479482c2b17\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:28.908914 kubelet[2246]: I0913 00:17:28.908879 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc3a0990fb32f8ff1551d479482c2b17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc3a0990fb32f8ff1551d479482c2b17\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:28.908914 kubelet[2246]: I0913 00:17:28.908892 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc3a0990fb32f8ff1551d479482c2b17-usr-share-ca-certificates\") pod 
\"kube-apiserver-localhost\" (UID: \"dc3a0990fb32f8ff1551d479482c2b17\") " pod="kube-system/kube-apiserver-localhost" Sep 13 00:17:28.908914 kubelet[2246]: I0913 00:17:28.908908 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost" Sep 13 00:17:28.908996 kubelet[2246]: I0913 00:17:28.908922 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:28.908996 kubelet[2246]: I0913 00:17:28.908937 2246 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost" Sep 13 00:17:28.911119 kubelet[2246]: E0913 00:17:28.911079 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="400ms" Sep 13 00:17:29.108710 kubelet[2246]: I0913 00:17:29.108659 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:17:29.108964 kubelet[2246]: E0913 00:17:29.108943 2246 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: 
connection refused" node="localhost" Sep 13 00:17:29.125269 kubelet[2246]: E0913 00:17:29.125237 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:29.125785 containerd[1537]: time="2025-09-13T00:17:29.125750131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dc3a0990fb32f8ff1551d479482c2b17,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:29.128130 kubelet[2246]: E0913 00:17:29.127947 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:29.128271 containerd[1537]: time="2025-09-13T00:17:29.128241831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:29.129653 kubelet[2246]: E0913 00:17:29.129470 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:29.129768 containerd[1537]: time="2025-09-13T00:17:29.129741754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,}" Sep 13 00:17:29.311990 kubelet[2246]: E0913 00:17:29.311880 2246 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.134:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.134:6443: connect: connection refused" interval="800ms" Sep 13 00:17:29.510817 kubelet[2246]: I0913 00:17:29.510784 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:17:29.511153 kubelet[2246]: E0913 00:17:29.511111 2246 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.134:6443/api/v1/nodes\": dial tcp 10.0.0.134:6443: connect: connection refused" node="localhost" Sep 13 00:17:29.579026 kubelet[2246]: W0913 00:17:29.578870 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 13 00:17:29.579026 kubelet[2246]: E0913 00:17:29.578940 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.134:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:17:29.659387 kubelet[2246]: W0913 00:17:29.659264 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 13 00:17:29.659387 kubelet[2246]: E0913 00:17:29.659328 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.134:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:17:29.665489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount999354201.mount: Deactivated successfully. 
Sep 13 00:17:29.671333 containerd[1537]: time="2025-09-13T00:17:29.670555684Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:29.671333 containerd[1537]: time="2025-09-13T00:17:29.671298727Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Sep 13 00:17:29.673356 containerd[1537]: time="2025-09-13T00:17:29.673321351Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:29.675879 containerd[1537]: time="2025-09-13T00:17:29.675843922Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:29.676658 containerd[1537]: time="2025-09-13T00:17:29.676613878Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:29.677515 containerd[1537]: time="2025-09-13T00:17:29.677337886Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:17:29.679061 containerd[1537]: time="2025-09-13T00:17:29.678977052Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Sep 13 00:17:29.680091 containerd[1537]: time="2025-09-13T00:17:29.679743249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 13 00:17:29.680670 
containerd[1537]: time="2025-09-13T00:17:29.680636972Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.805062ms" Sep 13 00:17:29.684991 containerd[1537]: time="2025-09-13T00:17:29.684956348Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.656172ms" Sep 13 00:17:29.686612 containerd[1537]: time="2025-09-13T00:17:29.686575998Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.752547ms" Sep 13 00:17:29.707416 kubelet[2246]: W0913 00:17:29.707272 2246 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.134:6443: connect: connection refused Sep 13 00:17:29.707416 kubelet[2246]: E0913 00:17:29.707339 2246 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.134:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.134:6443: connect: connection refused" logger="UnhandledError" Sep 13 00:17:29.770776 containerd[1537]: time="2025-09-13T00:17:29.770689389Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:29.770776 containerd[1537]: time="2025-09-13T00:17:29.770745974Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:29.770776 containerd[1537]: time="2025-09-13T00:17:29.770767129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:29.771147 containerd[1537]: time="2025-09-13T00:17:29.771108318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.773379716Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.773433422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.773457016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.773546272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.774624307Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.774675173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.774695008Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:29.774820 containerd[1537]: time="2025-09-13T00:17:29.774776746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Sep 13 00:17:29.825057 containerd[1537]: time="2025-09-13T00:17:29.825013274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:fe5e332fba00ba0b5b33a25fe2e8fd7b,Namespace:kube-system,Attempt:0,} returns sandbox id \"f6a8d87a14696f27430bbbd165f9f73fe215cf64d1b88eb16dbdb7bd41274427\"" Sep 13 00:17:29.825292 containerd[1537]: time="2025-09-13T00:17:29.825166753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:dc3a0990fb32f8ff1551d479482c2b17,Namespace:kube-system,Attempt:0,} returns sandbox id \"a07aa85ff81c1c0654e141aaa9283010224fa17e0683d90c374242b5d7689869\"" Sep 13 00:17:29.828081 kubelet[2246]: E0913 00:17:29.828054 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:29.828331 kubelet[2246]: E0913 00:17:29.828199 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:29.830525 containerd[1537]: time="2025-09-13T00:17:29.830410364Z" level=info msg="CreateContainer within sandbox \"f6a8d87a14696f27430bbbd165f9f73fe215cf64d1b88eb16dbdb7bd41274427\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 13 00:17:29.831650 containerd[1537]: time="2025-09-13T00:17:29.831612725Z" level=info msg="CreateContainer within sandbox 
\"a07aa85ff81c1c0654e141aaa9283010224fa17e0683d90c374242b5d7689869\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 13 00:17:29.833992 containerd[1537]: time="2025-09-13T00:17:29.833934030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:71d8bf7bd9b7c7432927bee9d50592b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"626617939835317d8e0cb9d1a9592d426c85dfcfdcc3dedebae85875a25e9a9b\"" Sep 13 00:17:29.836920 kubelet[2246]: E0913 00:17:29.836900 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:29.839432 containerd[1537]: time="2025-09-13T00:17:29.839296609Z" level=info msg="CreateContainer within sandbox \"626617939835317d8e0cb9d1a9592d426c85dfcfdcc3dedebae85875a25e9a9b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 13 00:17:29.850142 containerd[1537]: time="2025-09-13T00:17:29.849983417Z" level=info msg="CreateContainer within sandbox \"f6a8d87a14696f27430bbbd165f9f73fe215cf64d1b88eb16dbdb7bd41274427\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ee0d128452d21171b6314594dfe91d641bfe2e579c62ed20ded1c398e17deecc\"" Sep 13 00:17:29.850656 containerd[1537]: time="2025-09-13T00:17:29.850626447Z" level=info msg="StartContainer for \"ee0d128452d21171b6314594dfe91d641bfe2e579c62ed20ded1c398e17deecc\"" Sep 13 00:17:29.853930 containerd[1537]: time="2025-09-13T00:17:29.853889502Z" level=info msg="CreateContainer within sandbox \"a07aa85ff81c1c0654e141aaa9283010224fa17e0683d90c374242b5d7689869\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"de3a41ae0e3f298ce60d0d8ad86a39e3fbe617a069c32d90e34d74dcbec39d8e\"" Sep 13 00:17:29.854570 containerd[1537]: time="2025-09-13T00:17:29.854517856Z" level=info msg="StartContainer for 
\"de3a41ae0e3f298ce60d0d8ad86a39e3fbe617a069c32d90e34d74dcbec39d8e\"" Sep 13 00:17:29.855997 containerd[1537]: time="2025-09-13T00:17:29.855665831Z" level=info msg="CreateContainer within sandbox \"626617939835317d8e0cb9d1a9592d426c85dfcfdcc3dedebae85875a25e9a9b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"71b0d0b9de31543dbf4435e5565e8511f53565d8a1ba061fc8f6e4ffc8009e10\"" Sep 13 00:17:29.856162 containerd[1537]: time="2025-09-13T00:17:29.856117832Z" level=info msg="StartContainer for \"71b0d0b9de31543dbf4435e5565e8511f53565d8a1ba061fc8f6e4ffc8009e10\"" Sep 13 00:17:29.912909 containerd[1537]: time="2025-09-13T00:17:29.912857916Z" level=info msg="StartContainer for \"ee0d128452d21171b6314594dfe91d641bfe2e579c62ed20ded1c398e17deecc\" returns successfully" Sep 13 00:17:29.922584 containerd[1537]: time="2025-09-13T00:17:29.919493238Z" level=info msg="StartContainer for \"71b0d0b9de31543dbf4435e5565e8511f53565d8a1ba061fc8f6e4ffc8009e10\" returns successfully" Sep 13 00:17:29.922879 containerd[1537]: time="2025-09-13T00:17:29.921376259Z" level=info msg="StartContainer for \"de3a41ae0e3f298ce60d0d8ad86a39e3fbe617a069c32d90e34d74dcbec39d8e\" returns successfully" Sep 13 00:17:30.313892 kubelet[2246]: I0913 00:17:30.313236 2246 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Sep 13 00:17:30.737050 kubelet[2246]: E0913 00:17:30.736952 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:30.739537 kubelet[2246]: E0913 00:17:30.737965 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:30.740711 kubelet[2246]: E0913 00:17:30.740693 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:31.457492 kubelet[2246]: E0913 00:17:31.457444 2246 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 13 00:17:31.560628 kubelet[2246]: I0913 00:17:31.560446 2246 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Sep 13 00:17:31.560628 kubelet[2246]: E0913 00:17:31.560481 2246 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 13 00:17:31.571159 kubelet[2246]: E0913 00:17:31.571081 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:31.671887 kubelet[2246]: E0913 00:17:31.671828 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:31.742691 kubelet[2246]: E0913 00:17:31.742583 2246 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:17:31.772108 kubelet[2246]: E0913 00:17:31.772065 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:31.872227 kubelet[2246]: E0913 00:17:31.872179 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:31.972756 kubelet[2246]: E0913 00:17:31.972716 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:32.073385 kubelet[2246]: E0913 00:17:32.073268 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:32.173798 kubelet[2246]: E0913 00:17:32.173755 2246 kubelet_node_status.go:453] "Error getting 
the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:32.273944 kubelet[2246]: E0913 00:17:32.273901 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:32.374660 kubelet[2246]: E0913 00:17:32.374553 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:32.475059 kubelet[2246]: E0913 00:17:32.475025 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:32.575538 kubelet[2246]: E0913 00:17:32.575350 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:32.676195 kubelet[2246]: E0913 00:17:32.676052 2246 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 13 00:17:33.589917 systemd[1]: Reloading requested from client PID 2523 ('systemctl') (unit session-7.scope)... Sep 13 00:17:33.589933 systemd[1]: Reloading... Sep 13 00:17:33.639527 zram_generator::config[2562]: No configuration found. Sep 13 00:17:33.697489 kubelet[2246]: I0913 00:17:33.697455 2246 apiserver.go:52] "Watching apiserver" Sep 13 00:17:33.708443 kubelet[2246]: I0913 00:17:33.708404 2246 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Sep 13 00:17:33.732856 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Sep 13 00:17:33.790645 systemd[1]: Reloading finished in 200 ms. Sep 13 00:17:33.818717 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:17:33.832002 systemd[1]: kubelet.service: Deactivated successfully. 
Sep 13 00:17:33.832372 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:17:33.843788 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 13 00:17:33.936976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 13 00:17:33.941062 (kubelet)[2614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 13 00:17:33.974319 kubelet[2614]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:17:33.974319 kubelet[2614]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Sep 13 00:17:33.974319 kubelet[2614]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 13 00:17:33.974743 kubelet[2614]: I0913 00:17:33.974363 2614 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 13 00:17:33.982038 kubelet[2614]: I0913 00:17:33.982009 2614 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Sep 13 00:17:33.982038 kubelet[2614]: I0913 00:17:33.982036 2614 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 13 00:17:33.982282 kubelet[2614]: I0913 00:17:33.982267 2614 server.go:934] "Client rotation is on, will bootstrap in background" Sep 13 00:17:33.983654 kubelet[2614]: I0913 00:17:33.983637 2614 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Sep 13 00:17:33.986017 kubelet[2614]: I0913 00:17:33.985945 2614 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 13 00:17:33.991088 kubelet[2614]: E0913 00:17:33.991063 2614 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Sep 13 00:17:33.991088 kubelet[2614]: I0913 00:17:33.991088 2614 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Sep 13 00:17:33.995697 kubelet[2614]: I0913 00:17:33.995338 2614 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 13 00:17:33.995859 kubelet[2614]: I0913 00:17:33.995836 2614 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Sep 13 00:17:33.995979 kubelet[2614]: I0913 00:17:33.995956 2614 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 13 00:17:33.996200 kubelet[2614]: I0913 00:17:33.995980 2614 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
Sep 13 00:17:33.996283 kubelet[2614]: I0913 00:17:33.996208 2614 topology_manager.go:138] "Creating topology manager with none policy"
Sep 13 00:17:33.996283 kubelet[2614]: I0913 00:17:33.996217 2614 container_manager_linux.go:300] "Creating device plugin manager"
Sep 13 00:17:33.996283 kubelet[2614]: I0913 00:17:33.996264 2614 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:17:33.996365 kubelet[2614]: I0913 00:17:33.996352 2614 kubelet.go:408] "Attempting to sync node with API server"
Sep 13 00:17:33.996365 kubelet[2614]: I0913 00:17:33.996366 2614 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 13 00:17:33.996426 kubelet[2614]: I0913 00:17:33.996388 2614 kubelet.go:314] "Adding apiserver pod source"
Sep 13 00:17:33.996426 kubelet[2614]: I0913 00:17:33.996401 2614 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 13 00:17:34.003842 kubelet[2614]: I0913 00:17:34.002112 2614 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Sep 13 00:17:34.003842 kubelet[2614]: I0913 00:17:34.002598 2614 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Sep 13 00:17:34.003842 kubelet[2614]: I0913 00:17:34.002950 2614 server.go:1274] "Started kubelet"
Sep 13 00:17:34.004520 kubelet[2614]: I0913 00:17:34.004481 2614 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Sep 13 00:17:34.007607 kubelet[2614]: I0913 00:17:34.006668 2614 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 13 00:17:34.007833 kubelet[2614]: I0913 00:17:34.007814 2614 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 13 00:17:34.009220 kubelet[2614]: I0913 00:17:34.009182 2614 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 13 00:17:34.012100 kubelet[2614]: I0913 00:17:34.012077 2614 server.go:449] "Adding debug handlers to kubelet server"
Sep 13 00:17:34.012437 kubelet[2614]: I0913 00:17:34.012412 2614 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 13 00:17:34.013345 kubelet[2614]: I0913 00:17:34.013323 2614 volume_manager.go:289] "Starting Kubelet Volume Manager"
Sep 13 00:17:34.013537 kubelet[2614]: E0913 00:17:34.013518 2614 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 13 00:17:34.014371 kubelet[2614]: I0913 00:17:34.014347 2614 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Sep 13 00:17:34.015644 kubelet[2614]: I0913 00:17:34.015352 2614 reconciler.go:26] "Reconciler: start to sync state"
Sep 13 00:17:34.015703 kubelet[2614]: I0913 00:17:34.015667 2614 factory.go:221] Registration of the systemd container factory successfully
Sep 13 00:17:34.017518 kubelet[2614]: I0913 00:17:34.015786 2614 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 13 00:17:34.018462 kubelet[2614]: E0913 00:17:34.018146 2614 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 13 00:17:34.019100 kubelet[2614]: I0913 00:17:34.018850 2614 factory.go:221] Registration of the containerd container factory successfully
Sep 13 00:17:34.024636 kubelet[2614]: I0913 00:17:34.024600 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Sep 13 00:17:34.025931 kubelet[2614]: I0913 00:17:34.025908 2614 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Sep 13 00:17:34.025987 kubelet[2614]: I0913 00:17:34.025933 2614 status_manager.go:217] "Starting to sync pod status with apiserver"
Sep 13 00:17:34.025987 kubelet[2614]: I0913 00:17:34.025956 2614 kubelet.go:2321] "Starting kubelet main sync loop"
Sep 13 00:17:34.026043 kubelet[2614]: E0913 00:17:34.026009 2614 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 13 00:17:34.056570 kubelet[2614]: I0913 00:17:34.056543 2614 cpu_manager.go:214] "Starting CPU manager" policy="none"
Sep 13 00:17:34.056931 kubelet[2614]: I0913 00:17:34.056694 2614 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Sep 13 00:17:34.056931 kubelet[2614]: I0913 00:17:34.056715 2614 state_mem.go:36] "Initialized new in-memory state store"
Sep 13 00:17:34.056931 kubelet[2614]: I0913 00:17:34.056862 2614 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Sep 13 00:17:34.056931 kubelet[2614]: I0913 00:17:34.056872 2614 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Sep 13 00:17:34.056931 kubelet[2614]: I0913 00:17:34.056891 2614 policy_none.go:49] "None policy: Start"
Sep 13 00:17:34.057678 kubelet[2614]: I0913 00:17:34.057662 2614 memory_manager.go:170] "Starting memorymanager" policy="None"
Sep 13 00:17:34.057753 kubelet[2614]: I0913 00:17:34.057743 2614 state_mem.go:35] "Initializing new in-memory state store"
Sep 13 00:17:34.057958 kubelet[2614]: I0913 00:17:34.057943 2614 state_mem.go:75] "Updated machine memory state"
Sep 13 00:17:34.059062 kubelet[2614]: I0913 00:17:34.059038 2614 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Sep 13 00:17:34.059316 kubelet[2614]: I0913 00:17:34.059297 2614 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 13 00:17:34.059402 kubelet[2614]: I0913 00:17:34.059375 2614 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 13 00:17:34.059797 kubelet[2614]: I0913 00:17:34.059646 2614 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 13 00:17:34.163581 kubelet[2614]: I0913 00:17:34.163431 2614 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
Sep 13 00:17:34.169469 kubelet[2614]: I0913 00:17:34.169421 2614 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
Sep 13 00:17:34.169562 kubelet[2614]: I0913 00:17:34.169485 2614 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
Sep 13 00:17:34.216987 kubelet[2614]: I0913 00:17:34.216929 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/dc3a0990fb32f8ff1551d479482c2b17-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"dc3a0990fb32f8ff1551d479482c2b17\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:17:34.216987 kubelet[2614]: I0913 00:17:34.216977 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:17:34.216987 kubelet[2614]: I0913 00:17:34.216999 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:17:34.217172 kubelet[2614]: I0913 00:17:34.217014 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:17:34.217172 kubelet[2614]: I0913 00:17:34.217031 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/dc3a0990fb32f8ff1551d479482c2b17-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc3a0990fb32f8ff1551d479482c2b17\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:17:34.217172 kubelet[2614]: I0913 00:17:34.217045 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/dc3a0990fb32f8ff1551d479482c2b17-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"dc3a0990fb32f8ff1551d479482c2b17\") " pod="kube-system/kube-apiserver-localhost"
Sep 13 00:17:34.217172 kubelet[2614]: I0913 00:17:34.217059 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:17:34.217172 kubelet[2614]: I0913 00:17:34.217072 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/71d8bf7bd9b7c7432927bee9d50592b5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"71d8bf7bd9b7c7432927bee9d50592b5\") " pod="kube-system/kube-controller-manager-localhost"
Sep 13 00:17:34.217288 kubelet[2614]: I0913 00:17:34.217086 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe5e332fba00ba0b5b33a25fe2e8fd7b-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"fe5e332fba00ba0b5b33a25fe2e8fd7b\") " pod="kube-system/kube-scheduler-localhost"
Sep 13 00:17:34.436491 kubelet[2614]: E0913 00:17:34.436371 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:34.436705 kubelet[2614]: E0913 00:17:34.436674 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:34.439932 kubelet[2614]: E0913 00:17:34.439900 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:34.588129 sudo[2649]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Sep 13 00:17:34.588776 sudo[2649]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Sep 13 00:17:34.997997 kubelet[2614]: I0913 00:17:34.997406 2614 apiserver.go:52] "Watching apiserver"
Sep 13 00:17:35.014912 kubelet[2614]: I0913 00:17:35.014866 2614 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Sep 13 00:17:35.039896 kubelet[2614]: E0913 00:17:35.039426 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:35.041785 kubelet[2614]: E0913 00:17:35.040812 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:35.042759 sudo[2649]: pam_unix(sudo:session): session closed for user root
Sep 13 00:17:35.069576 kubelet[2614]: E0913 00:17:35.068975 2614 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Sep 13 00:17:35.069576 kubelet[2614]: E0913 00:17:35.069148 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:35.082435 kubelet[2614]: I0913 00:17:35.082202 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.082184882 podStartE2EDuration="1.082184882s" podCreationTimestamp="2025-09-13 00:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:35.068818726 +0000 UTC m=+1.124768665" watchObservedRunningTime="2025-09-13 00:17:35.082184882 +0000 UTC m=+1.138134781"
Sep 13 00:17:35.082435 kubelet[2614]: I0913 00:17:35.082467 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.082457273 podStartE2EDuration="1.082457273s" podCreationTimestamp="2025-09-13 00:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:35.082331056 +0000 UTC m=+1.138280995" watchObservedRunningTime="2025-09-13 00:17:35.082457273 +0000 UTC m=+1.138407212"
Sep 13 00:17:35.113024 kubelet[2614]: I0913 00:17:35.112805 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.112787337 podStartE2EDuration="1.112787337s" podCreationTimestamp="2025-09-13 00:17:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:35.103035891 +0000 UTC m=+1.158985830" watchObservedRunningTime="2025-09-13 00:17:35.112787337 +0000 UTC m=+1.168737276"
Sep 13 00:17:36.041759 kubelet[2614]: E0913 00:17:36.040858 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:36.669691 sudo[1745]: pam_unix(sudo:session): session closed for user root
Sep 13 00:17:36.671250 sshd[1738]: pam_unix(sshd:session): session closed for user core
Sep 13 00:17:36.674519 systemd[1]: sshd@6-10.0.0.134:22-10.0.0.1:54900.service: Deactivated successfully.
Sep 13 00:17:36.676682 systemd-logind[1524]: Session 7 logged out. Waiting for processes to exit.
Sep 13 00:17:36.677143 systemd[1]: session-7.scope: Deactivated successfully.
Sep 13 00:17:36.678067 systemd-logind[1524]: Removed session 7.
Sep 13 00:17:38.421680 kubelet[2614]: E0913 00:17:38.420320 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:38.793356 kubelet[2614]: I0913 00:17:38.793253 2614 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Sep 13 00:17:38.793660 containerd[1537]: time="2025-09-13T00:17:38.793626539Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Sep 13 00:17:38.793952 kubelet[2614]: I0913 00:17:38.793895 2614 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Sep 13 00:17:39.248427 kubelet[2614]: I0913 00:17:39.248303 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-bpf-maps\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248427 kubelet[2614]: I0913 00:17:39.248351 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-lib-modules\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248427 kubelet[2614]: I0913 00:17:39.248370 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-kernel\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248427 kubelet[2614]: I0913 00:17:39.248394 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-etc-cni-netd\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248427 kubelet[2614]: I0913 00:17:39.248409 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/905074f0-ef7c-4402-8c5f-8f7737a5b78a-clustermesh-secrets\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248427 kubelet[2614]: I0913 00:17:39.248424 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cd86853-c03f-4487-806a-6f8402a89a18-xtables-lock\") pod \"kube-proxy-b2xq9\" (UID: \"0cd86853-c03f-4487-806a-6f8402a89a18\") " pod="kube-system/kube-proxy-b2xq9"
Sep 13 00:17:39.248653 kubelet[2614]: I0913 00:17:39.248437 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-run\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248653 kubelet[2614]: I0913 00:17:39.248451 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-xtables-lock\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248653 kubelet[2614]: I0913 00:17:39.248465 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-config-path\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248653 kubelet[2614]: I0913 00:17:39.248479 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xbzgv\" (UniqueName: \"kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-kube-api-access-xbzgv\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248653 kubelet[2614]: I0913 00:17:39.248512 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-net\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248757 kubelet[2614]: I0913 00:17:39.248528 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0cd86853-c03f-4487-806a-6f8402a89a18-kube-proxy\") pod \"kube-proxy-b2xq9\" (UID: \"0cd86853-c03f-4487-806a-6f8402a89a18\") " pod="kube-system/kube-proxy-b2xq9"
Sep 13 00:17:39.248757 kubelet[2614]: I0913 00:17:39.248545 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0cd86853-c03f-4487-806a-6f8402a89a18-lib-modules\") pod \"kube-proxy-b2xq9\" (UID: \"0cd86853-c03f-4487-806a-6f8402a89a18\") " pod="kube-system/kube-proxy-b2xq9"
Sep 13 00:17:39.248757 kubelet[2614]: I0913 00:17:39.248559 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-cgroup\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248757 kubelet[2614]: I0913 00:17:39.248580 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xzd5\" (UniqueName: \"kubernetes.io/projected/0cd86853-c03f-4487-806a-6f8402a89a18-kube-api-access-9xzd5\") pod \"kube-proxy-b2xq9\" (UID: \"0cd86853-c03f-4487-806a-6f8402a89a18\") " pod="kube-system/kube-proxy-b2xq9"
Sep 13 00:17:39.248757 kubelet[2614]: I0913 00:17:39.248597 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cni-path\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248757 kubelet[2614]: I0913 00:17:39.248613 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hubble-tls\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.248884 kubelet[2614]: I0913 00:17:39.248628 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hostproc\") pod \"cilium-9jh8g\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " pod="kube-system/cilium-9jh8g"
Sep 13 00:17:39.359565 kubelet[2614]: E0913 00:17:39.359532 2614 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:17:39.359918 kubelet[2614]: E0913 00:17:39.359701 2614 projected.go:194] Error preparing data for projected volume kube-api-access-9xzd5 for pod kube-system/kube-proxy-b2xq9: configmap "kube-root-ca.crt" not found
Sep 13 00:17:39.359918 kubelet[2614]: E0913 00:17:39.359759 2614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/0cd86853-c03f-4487-806a-6f8402a89a18-kube-api-access-9xzd5 podName:0cd86853-c03f-4487-806a-6f8402a89a18 nodeName:}" failed. No retries permitted until 2025-09-13 00:17:39.859739996 +0000 UTC m=+5.915689935 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9xzd5" (UniqueName: "kubernetes.io/projected/0cd86853-c03f-4487-806a-6f8402a89a18-kube-api-access-9xzd5") pod "kube-proxy-b2xq9" (UID: "0cd86853-c03f-4487-806a-6f8402a89a18") : configmap "kube-root-ca.crt" not found
Sep 13 00:17:39.366399 kubelet[2614]: E0913 00:17:39.366349 2614 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Sep 13 00:17:39.366399 kubelet[2614]: E0913 00:17:39.366379 2614 projected.go:194] Error preparing data for projected volume kube-api-access-xbzgv for pod kube-system/cilium-9jh8g: configmap "kube-root-ca.crt" not found
Sep 13 00:17:39.366545 kubelet[2614]: E0913 00:17:39.366419 2614 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-kube-api-access-xbzgv podName:905074f0-ef7c-4402-8c5f-8f7737a5b78a nodeName:}" failed. No retries permitted until 2025-09-13 00:17:39.86640539 +0000 UTC m=+5.922355329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xbzgv" (UniqueName: "kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-kube-api-access-xbzgv") pod "cilium-9jh8g" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a") : configmap "kube-root-ca.crt" not found
Sep 13 00:17:39.853248 kubelet[2614]: I0913 00:17:39.853174 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pw8n9\" (UniqueName: \"kubernetes.io/projected/7c0c532e-c9db-414e-8a19-4107ef34595c-kube-api-access-pw8n9\") pod \"cilium-operator-5d85765b45-2vkfv\" (UID: \"7c0c532e-c9db-414e-8a19-4107ef34595c\") " pod="kube-system/cilium-operator-5d85765b45-2vkfv"
Sep 13 00:17:39.853248 kubelet[2614]: I0913 00:17:39.853235 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0c532e-c9db-414e-8a19-4107ef34595c-cilium-config-path\") pod \"cilium-operator-5d85765b45-2vkfv\" (UID: \"7c0c532e-c9db-414e-8a19-4107ef34595c\") " pod="kube-system/cilium-operator-5d85765b45-2vkfv"
Sep 13 00:17:40.129532 kubelet[2614]: E0913 00:17:40.128750 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:40.130327 containerd[1537]: time="2025-09-13T00:17:40.130249974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2vkfv,Uid:7c0c532e-c9db-414e-8a19-4107ef34595c,Namespace:kube-system,Attempt:0,}"
Sep 13 00:17:40.131449 kubelet[2614]: E0913 00:17:40.130885 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:40.131779 containerd[1537]: time="2025-09-13T00:17:40.131741459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2xq9,Uid:0cd86853-c03f-4487-806a-6f8402a89a18,Namespace:kube-system,Attempt:0,}"
Sep 13 00:17:40.135446 kubelet[2614]: E0913 00:17:40.135410 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:40.135794 containerd[1537]: time="2025-09-13T00:17:40.135767215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jh8g,Uid:905074f0-ef7c-4402-8c5f-8f7737a5b78a,Namespace:kube-system,Attempt:0,}"
Sep 13 00:17:40.225091 containerd[1537]: time="2025-09-13T00:17:40.225014868Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:17:40.225264 containerd[1537]: time="2025-09-13T00:17:40.225105696Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:17:40.225264 containerd[1537]: time="2025-09-13T00:17:40.225134293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:40.225417 containerd[1537]: time="2025-09-13T00:17:40.225247718Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:40.228889 containerd[1537]: time="2025-09-13T00:17:40.228709707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:17:40.228889 containerd[1537]: time="2025-09-13T00:17:40.228844369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:17:40.228889 containerd[1537]: time="2025-09-13T00:17:40.228858928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:40.229063 containerd[1537]: time="2025-09-13T00:17:40.228978832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:40.231933 containerd[1537]: time="2025-09-13T00:17:40.231849578Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:17:40.234582 containerd[1537]: time="2025-09-13T00:17:40.232252965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:17:40.234582 containerd[1537]: time="2025-09-13T00:17:40.232301039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:40.234582 containerd[1537]: time="2025-09-13T00:17:40.232411545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:40.274487 containerd[1537]: time="2025-09-13T00:17:40.274436430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9jh8g,Uid:905074f0-ef7c-4402-8c5f-8f7737a5b78a,Namespace:kube-system,Attempt:0,} returns sandbox id \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\""
Sep 13 00:17:40.276100 kubelet[2614]: E0913 00:17:40.276072 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:40.276584 containerd[1537]: time="2025-09-13T00:17:40.276559473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-b2xq9,Uid:0cd86853-c03f-4487-806a-6f8402a89a18,Namespace:kube-system,Attempt:0,} returns sandbox id \"685dd17c33dedc7a2fa25f3fa49858214e4ecfdc13b14020db655e7e2eb949ba\""
Sep 13 00:17:40.277982 kubelet[2614]: E0913 00:17:40.277959 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:40.279724 containerd[1537]: time="2025-09-13T00:17:40.279701464Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 13 00:17:40.291035 containerd[1537]: time="2025-09-13T00:17:40.291004952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-2vkfv,Uid:7c0c532e-c9db-414e-8a19-4107ef34595c,Namespace:kube-system,Attempt:0,} returns sandbox id \"11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708\""
Sep 13 00:17:40.293276 kubelet[2614]: E0913 00:17:40.293250 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:40.301648 containerd[1537]: time="2025-09-13T00:17:40.301610050Z" level=info msg="CreateContainer within sandbox \"685dd17c33dedc7a2fa25f3fa49858214e4ecfdc13b14020db655e7e2eb949ba\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 13 00:17:40.323315 containerd[1537]: time="2025-09-13T00:17:40.323274228Z" level=info msg="CreateContainer within sandbox \"685dd17c33dedc7a2fa25f3fa49858214e4ecfdc13b14020db655e7e2eb949ba\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2f637ec8b6d8ac7a8079bdfa4342efe31bbc1ce6cc70c33e6de4e13c4e6ff29b\""
Sep 13 00:17:40.323942 containerd[1537]: time="2025-09-13T00:17:40.323915224Z" level=info msg="StartContainer for \"2f637ec8b6d8ac7a8079bdfa4342efe31bbc1ce6cc70c33e6de4e13c4e6ff29b\""
Sep 13 00:17:40.389916 containerd[1537]: time="2025-09-13T00:17:40.389815479Z" level=info msg="StartContainer for \"2f637ec8b6d8ac7a8079bdfa4342efe31bbc1ce6cc70c33e6de4e13c4e6ff29b\" returns successfully"
Sep 13 00:17:41.060827 kubelet[2614]: E0913 00:17:41.060796 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:43.486095 kubelet[2614]: E0913 00:17:43.486063 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:43.504188 kubelet[2614]: I0913 00:17:43.504121 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-b2xq9" podStartSLOduration=4.504101636 podStartE2EDuration="4.504101636s" podCreationTimestamp="2025-09-13 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:41.069245049 +0000 UTC m=+7.125194988" watchObservedRunningTime="2025-09-13 00:17:43.504101636 +0000 UTC m=+9.560051535"
Sep 13 00:17:44.067332 kubelet[2614]: E0913 00:17:44.067303 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:44.289528 kubelet[2614]: E0913 00:17:44.288850 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:44.400067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724111427.mount: Deactivated successfully.
Sep 13 00:17:45.069000 kubelet[2614]: E0913 00:17:45.068967 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:45.675469 containerd[1537]: time="2025-09-13T00:17:45.675423118Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:45.676516 containerd[1537]: time="2025-09-13T00:17:45.676465380Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 13 00:17:45.677373 containerd[1537]: time="2025-09-13T00:17:45.677326499Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:45.679056 containerd[1537]: time="2025-09-13T00:17:45.678941346Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.399041627s"
Sep 13 00:17:45.679056 containerd[1537]: time="2025-09-13T00:17:45.678975303Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 13 00:17:45.682047 containerd[1537]: time="2025-09-13T00:17:45.681997938Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 13 00:17:45.686143 containerd[1537]: time="2025-09-13T00:17:45.686095311Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:17:45.714069 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3896189043.mount: Deactivated successfully.
Sep 13 00:17:45.716884 containerd[1537]: time="2025-09-13T00:17:45.716837811Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\""
Sep 13 00:17:45.717505 containerd[1537]: time="2025-09-13T00:17:45.717326765Z" level=info msg="StartContainer for \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\""
Sep 13 00:17:45.757866 containerd[1537]: time="2025-09-13T00:17:45.757831544Z" level=info msg="StartContainer for \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\" returns successfully"
Sep 13 00:17:45.910769 containerd[1537]: time="2025-09-13T00:17:45.899170610Z" level=info msg="shim disconnected" id=bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e namespace=k8s.io
Sep 13 00:17:45.910769 containerd[1537]: time="2025-09-13T00:17:45.910758717Z" level=warning msg="cleaning up after shim disconnected" id=bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e namespace=k8s.io
Sep 13 00:17:45.910769 containerd[1537]: time="2025-09-13T00:17:45.910774195Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:17:46.075230 kubelet[2614]: E0913 00:17:46.075187 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:46.077878 containerd[1537]: time="2025-09-13T00:17:46.077830368Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:17:46.092006 containerd[1537]: time="2025-09-13T00:17:46.091950559Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\""
Sep 13 00:17:46.093278 containerd[1537]: time="2025-09-13T00:17:46.092453874Z" level=info msg="StartContainer for \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\""
Sep 13 00:17:46.133877 containerd[1537]: time="2025-09-13T00:17:46.133837374Z" level=info msg="StartContainer for \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\" returns successfully"
Sep 13 00:17:46.144873 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 13 00:17:46.145417 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:17:46.145524 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:17:46.151837 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 13 00:17:46.168103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 13 00:17:46.169205 containerd[1537]: time="2025-09-13T00:17:46.169142532Z" level=info msg="shim disconnected" id=5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540 namespace=k8s.io
Sep 13 00:17:46.169205 containerd[1537]: time="2025-09-13T00:17:46.169195527Z" level=warning msg="cleaning up after shim disconnected" id=5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540 namespace=k8s.io
Sep 13 00:17:46.169205 containerd[1537]: time="2025-09-13T00:17:46.169205486Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:17:46.711763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e-rootfs.mount: Deactivated successfully.
Sep 13 00:17:46.887401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3333746046.mount: Deactivated successfully.
Sep 13 00:17:47.078222 kubelet[2614]: E0913 00:17:47.078178 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:47.080573 containerd[1537]: time="2025-09-13T00:17:47.079976325Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:17:47.115681 containerd[1537]: time="2025-09-13T00:17:47.115636328Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\""
Sep 13 00:17:47.116400 containerd[1537]: time="2025-09-13T00:17:47.116353188Z" level=info msg="StartContainer for \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\""
Sep 13 00:17:47.168012 containerd[1537]: time="2025-09-13T00:17:47.167906154Z" level=info msg="StartContainer for \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\" returns successfully"
Sep 13 00:17:47.204688 containerd[1537]: time="2025-09-13T00:17:47.204633869Z" level=info msg="shim disconnected" id=b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53 namespace=k8s.io
Sep 13 00:17:47.204688 containerd[1537]: time="2025-09-13T00:17:47.204683345Z" level=warning msg="cleaning up after shim disconnected" id=b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53 namespace=k8s.io
Sep 13 00:17:47.204688 containerd[1537]: time="2025-09-13T00:17:47.204693384Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:17:47.623070 containerd[1537]: time="2025-09-13T00:17:47.623010139Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:47.623934 containerd[1537]: time="2025-09-13T00:17:47.623828311Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 13 00:17:47.630216 containerd[1537]: time="2025-09-13T00:17:47.630186504Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 13 00:17:47.632205 containerd[1537]: time="2025-09-13T00:17:47.632160620Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.950129166s"
Sep 13 00:17:47.632205 containerd[1537]: time="2025-09-13T00:17:47.632197177Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 13 00:17:47.634625 containerd[1537]: time="2025-09-13T00:17:47.634594699Z" level=info msg="CreateContainer within sandbox \"11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 13 00:17:47.643341 containerd[1537]: time="2025-09-13T00:17:47.643252021Z" level=info msg="CreateContainer within sandbox \"11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\""
Sep 13 00:17:47.643829 containerd[1537]: time="2025-09-13T00:17:47.643760739Z" level=info msg="StartContainer for \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\""
Sep 13 00:17:47.725522 containerd[1537]: time="2025-09-13T00:17:47.725394410Z" level=info msg="StartContainer for \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\" returns successfully"
Sep 13 00:17:48.082247 kubelet[2614]: E0913 00:17:48.082210 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:48.089222 kubelet[2614]: E0913 00:17:48.089191 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:48.092581 containerd[1537]: time="2025-09-13T00:17:48.090284815Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:17:48.114523 containerd[1537]: time="2025-09-13T00:17:48.114452097Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\""
Sep 13 00:17:48.115050 containerd[1537]: time="2025-09-13T00:17:48.115015533Z" level=info msg="StartContainer for \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\""
Sep 13 00:17:48.214868 containerd[1537]: time="2025-09-13T00:17:48.214810855Z" level=info msg="StartContainer for \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\" returns successfully"
Sep 13 00:17:48.241828 containerd[1537]: time="2025-09-13T00:17:48.241766040Z" level=info msg="shim disconnected" id=0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23 namespace=k8s.io
Sep 13 00:17:48.243664 containerd[1537]: time="2025-09-13T00:17:48.243533263Z" level=warning msg="cleaning up after shim disconnected" id=0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23 namespace=k8s.io
Sep 13 00:17:48.243664 containerd[1537]: time="2025-09-13T00:17:48.243559981Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:17:48.430723 kubelet[2614]: E0913 00:17:48.430260 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:48.445524 kubelet[2614]: I0913 00:17:48.442943 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-2vkfv" podStartSLOduration=2.104506349 podStartE2EDuration="9.442924924s" podCreationTimestamp="2025-09-13 00:17:39 +0000 UTC" firstStartedPulling="2025-09-13 00:17:40.294360514 +0000 UTC m=+6.350310453" lastFinishedPulling="2025-09-13 00:17:47.632779089 +0000 UTC m=+13.688729028" observedRunningTime="2025-09-13 00:17:48.169538015 +0000 UTC m=+14.225488034" watchObservedRunningTime="2025-09-13 00:17:48.442924924 +0000 UTC m=+14.498874863"
Sep 13 00:17:48.711444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23-rootfs.mount: Deactivated successfully.
Sep 13 00:17:49.093599 kubelet[2614]: E0913 00:17:49.093559 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:49.094978 kubelet[2614]: E0913 00:17:49.094954 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:49.101529 containerd[1537]: time="2025-09-13T00:17:49.100813267Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:17:49.123320 containerd[1537]: time="2025-09-13T00:17:49.123258111Z" level=info msg="CreateContainer within sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\""
Sep 13 00:17:49.123815 containerd[1537]: time="2025-09-13T00:17:49.123774794Z" level=info msg="StartContainer for \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\""
Sep 13 00:17:49.165280 containerd[1537]: time="2025-09-13T00:17:49.165230853Z" level=info msg="StartContainer for \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\" returns successfully"
Sep 13 00:17:49.259609 kubelet[2614]: I0913 00:17:49.259569 2614 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Sep 13 00:17:49.467108 kubelet[2614]: I0913 00:17:49.466879 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-plvxl\" (UniqueName: \"kubernetes.io/projected/6f7b05ab-9c04-47ff-b666-abe821a60787-kube-api-access-plvxl\") pod \"coredns-7c65d6cfc9-j2w8s\" (UID: \"6f7b05ab-9c04-47ff-b666-abe821a60787\") " pod="kube-system/coredns-7c65d6cfc9-j2w8s"
Sep 13 00:17:49.467108 kubelet[2614]: I0913 00:17:49.466931 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f7b05ab-9c04-47ff-b666-abe821a60787-config-volume\") pod \"coredns-7c65d6cfc9-j2w8s\" (UID: \"6f7b05ab-9c04-47ff-b666-abe821a60787\") " pod="kube-system/coredns-7c65d6cfc9-j2w8s"
Sep 13 00:17:49.467108 kubelet[2614]: I0913 00:17:49.466953 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlbzv\" (UniqueName: \"kubernetes.io/projected/12f517b9-7267-4846-b45b-3a0eb828eed7-kube-api-access-dlbzv\") pod \"coredns-7c65d6cfc9-dplx2\" (UID: \"12f517b9-7267-4846-b45b-3a0eb828eed7\") " pod="kube-system/coredns-7c65d6cfc9-dplx2"
Sep 13 00:17:49.467108 kubelet[2614]: I0913 00:17:49.466975 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/12f517b9-7267-4846-b45b-3a0eb828eed7-config-volume\") pod \"coredns-7c65d6cfc9-dplx2\" (UID: \"12f517b9-7267-4846-b45b-3a0eb828eed7\") " pod="kube-system/coredns-7c65d6cfc9-dplx2"
Sep 13 00:17:49.588701 kubelet[2614]: E0913 00:17:49.588589 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:49.590880 containerd[1537]: time="2025-09-13T00:17:49.590837797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dplx2,Uid:12f517b9-7267-4846-b45b-3a0eb828eed7,Namespace:kube-system,Attempt:0,}"
Sep 13 00:17:49.591415 kubelet[2614]: E0913 00:17:49.591378 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:49.592216 containerd[1537]: time="2025-09-13T00:17:49.591721973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j2w8s,Uid:6f7b05ab-9c04-47ff-b666-abe821a60787,Namespace:kube-system,Attempt:0,}"
Sep 13 00:17:50.103948 kubelet[2614]: E0913 00:17:50.103371 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:50.121932 kubelet[2614]: I0913 00:17:50.121456 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9jh8g" podStartSLOduration=5.716873548 podStartE2EDuration="11.121436481s" podCreationTimestamp="2025-09-13 00:17:39 +0000 UTC" firstStartedPulling="2025-09-13 00:17:40.277293658 +0000 UTC m=+6.333243597" lastFinishedPulling="2025-09-13 00:17:45.681856591 +0000 UTC m=+11.737806530" observedRunningTime="2025-09-13 00:17:50.119072883 +0000 UTC m=+16.175022822" watchObservedRunningTime="2025-09-13 00:17:50.121436481 +0000 UTC m=+16.177386420"
Sep 13 00:17:50.155690 update_engine[1526]: I20250913 00:17:50.155596 1526 update_attempter.cc:509] Updating boot flags...
Sep 13 00:17:50.172535 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3457)
Sep 13 00:17:50.196525 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3460)
Sep 13 00:17:50.222607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3460)
Sep 13 00:17:51.104924 kubelet[2614]: E0913 00:17:51.104837 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:51.202283 systemd-networkd[1228]: cilium_host: Link UP
Sep 13 00:17:51.202401 systemd-networkd[1228]: cilium_net: Link UP
Sep 13 00:17:51.202404 systemd-networkd[1228]: cilium_net: Gained carrier
Sep 13 00:17:51.202540 systemd-networkd[1228]: cilium_host: Gained carrier
Sep 13 00:17:51.202682 systemd-networkd[1228]: cilium_host: Gained IPv6LL
Sep 13 00:17:51.276158 systemd-networkd[1228]: cilium_vxlan: Link UP
Sep 13 00:17:51.276164 systemd-networkd[1228]: cilium_vxlan: Gained carrier
Sep 13 00:17:51.536538 kernel: NET: Registered PF_ALG protocol family
Sep 13 00:17:51.768745 systemd-networkd[1228]: cilium_net: Gained IPv6LL
Sep 13 00:17:52.106641 kubelet[2614]: E0913 00:17:52.106615 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:52.185906 systemd-networkd[1228]: lxc_health: Link UP
Sep 13 00:17:52.187749 systemd-networkd[1228]: lxc_health: Gained carrier
Sep 13 00:17:52.651853 systemd-networkd[1228]: lxc760027e12fa0: Link UP
Sep 13 00:17:52.666601 kernel: eth0: renamed from tmp68310
Sep 13 00:17:52.682881 systemd-networkd[1228]: lxcde2833ebb22a: Link UP
Sep 13 00:17:52.683637 systemd-networkd[1228]: lxc760027e12fa0: Gained carrier
Sep 13 00:17:52.684590 kernel: eth0: renamed from tmp10fc0
Sep 13 00:17:52.692253 systemd-networkd[1228]: lxcde2833ebb22a: Gained carrier
Sep 13 00:17:53.177058 systemd-networkd[1228]: cilium_vxlan: Gained IPv6LL
Sep 13 00:17:53.881040 systemd-networkd[1228]: lxc760027e12fa0: Gained IPv6LL
Sep 13 00:17:53.945152 systemd-networkd[1228]: lxc_health: Gained IPv6LL
Sep 13 00:17:54.164419 kubelet[2614]: E0913 00:17:54.164288 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:54.585035 systemd-networkd[1228]: lxcde2833ebb22a: Gained IPv6LL
Sep 13 00:17:55.112526 kubelet[2614]: E0913 00:17:55.112461 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:56.240636 containerd[1537]: time="2025-09-13T00:17:56.240322305Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:17:56.240636 containerd[1537]: time="2025-09-13T00:17:56.240369623Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:17:56.240636 containerd[1537]: time="2025-09-13T00:17:56.240380903Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:56.240636 containerd[1537]: time="2025-09-13T00:17:56.240455539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:56.241314 containerd[1537]: time="2025-09-13T00:17:56.241087910Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Sep 13 00:17:56.241314 containerd[1537]: time="2025-09-13T00:17:56.241150507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:17:56.241314 containerd[1537]: time="2025-09-13T00:17:56.241164906Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:56.241314 containerd[1537]: time="2025-09-13T00:17:56.241242423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:17:56.265884 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:17:56.266652 systemd-resolved[1433]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 13 00:17:56.286698 containerd[1537]: time="2025-09-13T00:17:56.286663356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dplx2,Uid:12f517b9-7267-4846-b45b-3a0eb828eed7,Namespace:kube-system,Attempt:0,} returns sandbox id \"10fc0d6f1a03244c51feeace214d72810ac389856aa6c7bc6aee5fcf9325e6f9\""
Sep 13 00:17:56.287281 containerd[1537]: time="2025-09-13T00:17:56.287070297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-j2w8s,Uid:6f7b05ab-9c04-47ff-b666-abe821a60787,Namespace:kube-system,Attempt:0,} returns sandbox id \"6831014e44003fceccbe5a1b4fe6b690fb75d718934b36c7a5c5e032a2fabdc9\""
Sep 13 00:17:56.287804 kubelet[2614]: E0913 00:17:56.287778 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:56.288088 kubelet[2614]: E0913 00:17:56.287973 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:56.290464 containerd[1537]: time="2025-09-13T00:17:56.290435701Z" level=info msg="CreateContainer within sandbox \"6831014e44003fceccbe5a1b4fe6b690fb75d718934b36c7a5c5e032a2fabdc9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:17:56.291491 containerd[1537]: time="2025-09-13T00:17:56.291425815Z" level=info msg="CreateContainer within sandbox \"10fc0d6f1a03244c51feeace214d72810ac389856aa6c7bc6aee5fcf9325e6f9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 13 00:17:56.302588 containerd[1537]: time="2025-09-13T00:17:56.302466743Z" level=info msg="CreateContainer within sandbox \"6831014e44003fceccbe5a1b4fe6b690fb75d718934b36c7a5c5e032a2fabdc9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a0d48b54f2b343cd819cb404bb3cfe03149add15b6a7425f2610f222a81f130e\""
Sep 13 00:17:56.303047 containerd[1537]: time="2025-09-13T00:17:56.302975919Z" level=info msg="StartContainer for \"a0d48b54f2b343cd819cb404bb3cfe03149add15b6a7425f2610f222a81f130e\""
Sep 13 00:17:56.305342 containerd[1537]: time="2025-09-13T00:17:56.305303371Z" level=info msg="CreateContainer within sandbox \"10fc0d6f1a03244c51feeace214d72810ac389856aa6c7bc6aee5fcf9325e6f9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6bcdb0425ed619a6d83b92983477d7aa67fd0711f19a482008f3e8ff301705f4\""
Sep 13 00:17:56.305759 containerd[1537]: time="2025-09-13T00:17:56.305727112Z" level=info msg="StartContainer for \"6bcdb0425ed619a6d83b92983477d7aa67fd0711f19a482008f3e8ff301705f4\""
Sep 13 00:17:56.354855 containerd[1537]: time="2025-09-13T00:17:56.354738479Z" level=info msg="StartContainer for \"a0d48b54f2b343cd819cb404bb3cfe03149add15b6a7425f2610f222a81f130e\" returns successfully"
Sep 13 00:17:56.355027 containerd[1537]: time="2025-09-13T00:17:56.354893111Z" level=info msg="StartContainer for \"6bcdb0425ed619a6d83b92983477d7aa67fd0711f19a482008f3e8ff301705f4\" returns successfully"
Sep 13 00:17:57.119077 kubelet[2614]: E0913 00:17:57.119029 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:57.122813 kubelet[2614]: E0913 00:17:57.122772 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:57.146329 kubelet[2614]: I0913 00:17:57.146263 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dplx2" podStartSLOduration=18.146244747 podStartE2EDuration="18.146244747s" podCreationTimestamp="2025-09-13 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:57.144142278 +0000 UTC m=+23.200092217" watchObservedRunningTime="2025-09-13 00:17:57.146244747 +0000 UTC m=+23.202194686"
Sep 13 00:17:57.148128 kubelet[2614]: I0913 00:17:57.148069 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-j2w8s" podStartSLOduration=18.148056748 podStartE2EDuration="18.148056748s" podCreationTimestamp="2025-09-13 00:17:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:17:57.133916843 +0000 UTC m=+23.189866782" watchObservedRunningTime="2025-09-13 00:17:57.148056748 +0000 UTC m=+23.204006727"
Sep 13 00:17:58.124629 kubelet[2614]: E0913 00:17:58.124259 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:58.124629 kubelet[2614]: E0913 00:17:58.124511 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:59.126317 kubelet[2614]: E0913 00:17:59.126288 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:17:59.126317 kubelet[2614]: E0913 00:17:59.126321 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:02.501776 systemd[1]: Started sshd@7-10.0.0.134:22-10.0.0.1:47548.service - OpenSSH per-connection server daemon (10.0.0.1:47548).
Sep 13 00:18:02.533084 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 47548 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:02.534546 sshd[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:02.538596 systemd-logind[1524]: New session 8 of user core.
Sep 13 00:18:02.556754 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 13 00:18:02.679578 sshd[4008]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:02.682814 systemd[1]: sshd@7-10.0.0.134:22-10.0.0.1:47548.service: Deactivated successfully.
Sep 13 00:18:02.684973 systemd-logind[1524]: Session 8 logged out. Waiting for processes to exit.
Sep 13 00:18:02.685232 systemd[1]: session-8.scope: Deactivated successfully.
Sep 13 00:18:02.686195 systemd-logind[1524]: Removed session 8.
Sep 13 00:18:07.688763 systemd[1]: Started sshd@8-10.0.0.134:22-10.0.0.1:47562.service - OpenSSH per-connection server daemon (10.0.0.1:47562).
Sep 13 00:18:07.717769 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 47562 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:18:07.718950 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:07.723568 systemd-logind[1524]: New session 9 of user core. Sep 13 00:18:07.731861 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 13 00:18:07.875747 sshd[4025]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:07.878897 systemd[1]: sshd@8-10.0.0.134:22-10.0.0.1:47562.service: Deactivated successfully. Sep 13 00:18:07.881978 systemd-logind[1524]: Session 9 logged out. Waiting for processes to exit. Sep 13 00:18:07.882288 systemd[1]: session-9.scope: Deactivated successfully. Sep 13 00:18:07.883904 systemd-logind[1524]: Removed session 9. Sep 13 00:18:12.891800 systemd[1]: Started sshd@9-10.0.0.134:22-10.0.0.1:44894.service - OpenSSH per-connection server daemon (10.0.0.1:44894). Sep 13 00:18:12.930748 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 44894 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:18:12.932078 sshd[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:12.938118 systemd-logind[1524]: New session 10 of user core. Sep 13 00:18:12.949801 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 13 00:18:13.063323 sshd[4043]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:13.067482 systemd[1]: sshd@9-10.0.0.134:22-10.0.0.1:44894.service: Deactivated successfully. Sep 13 00:18:13.071198 systemd[1]: session-10.scope: Deactivated successfully. Sep 13 00:18:13.071432 systemd-logind[1524]: Session 10 logged out. Waiting for processes to exit. Sep 13 00:18:13.074081 systemd-logind[1524]: Removed session 10. 
Sep 13 00:18:18.080485 systemd[1]: Started sshd@10-10.0.0.134:22-10.0.0.1:44906.service - OpenSSH per-connection server daemon (10.0.0.1:44906).
Sep 13 00:18:18.122918 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 44906 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:18.124421 sshd[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:18.129028 systemd-logind[1524]: New session 11 of user core.
Sep 13 00:18:18.143565 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 13 00:18:18.275951 sshd[4059]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:18.290839 systemd[1]: Started sshd@11-10.0.0.134:22-10.0.0.1:44922.service - OpenSSH per-connection server daemon (10.0.0.1:44922).
Sep 13 00:18:18.291361 systemd[1]: sshd@10-10.0.0.134:22-10.0.0.1:44906.service: Deactivated successfully.
Sep 13 00:18:18.292927 systemd[1]: session-11.scope: Deactivated successfully.
Sep 13 00:18:18.294255 systemd-logind[1524]: Session 11 logged out. Waiting for processes to exit.
Sep 13 00:18:18.295429 systemd-logind[1524]: Removed session 11.
Sep 13 00:18:18.326026 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 44922 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:18.327372 sshd[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:18.332150 systemd-logind[1524]: New session 12 of user core.
Sep 13 00:18:18.344801 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 13 00:18:18.516900 sshd[4074]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:18.524881 systemd[1]: Started sshd@12-10.0.0.134:22-10.0.0.1:44924.service - OpenSSH per-connection server daemon (10.0.0.1:44924).
Sep 13 00:18:18.527252 systemd-logind[1524]: Session 12 logged out. Waiting for processes to exit.
Sep 13 00:18:18.534303 systemd[1]: sshd@11-10.0.0.134:22-10.0.0.1:44922.service: Deactivated successfully.
Sep 13 00:18:18.538711 systemd[1]: session-12.scope: Deactivated successfully.
Sep 13 00:18:18.543015 systemd-logind[1524]: Removed session 12.
Sep 13 00:18:18.565016 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 44924 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:18.566295 sshd[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:18.569885 systemd-logind[1524]: New session 13 of user core.
Sep 13 00:18:18.581807 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 13 00:18:18.694924 sshd[4086]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:18.699121 systemd[1]: sshd@12-10.0.0.134:22-10.0.0.1:44924.service: Deactivated successfully.
Sep 13 00:18:18.701328 systemd-logind[1524]: Session 13 logged out. Waiting for processes to exit.
Sep 13 00:18:18.701460 systemd[1]: session-13.scope: Deactivated successfully.
Sep 13 00:18:18.702709 systemd-logind[1524]: Removed session 13.
Sep 13 00:18:23.709739 systemd[1]: Started sshd@13-10.0.0.134:22-10.0.0.1:38452.service - OpenSSH per-connection server daemon (10.0.0.1:38452).
Sep 13 00:18:23.739183 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 38452 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:23.740398 sshd[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:23.744180 systemd-logind[1524]: New session 14 of user core.
Sep 13 00:18:23.758802 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 13 00:18:23.865376 sshd[4105]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:23.869089 systemd[1]: sshd@13-10.0.0.134:22-10.0.0.1:38452.service: Deactivated successfully.
Sep 13 00:18:23.872103 systemd-logind[1524]: Session 14 logged out. Waiting for processes to exit.
Sep 13 00:18:23.872595 systemd[1]: session-14.scope: Deactivated successfully.
Sep 13 00:18:23.873455 systemd-logind[1524]: Removed session 14.
Sep 13 00:18:28.877887 systemd[1]: Started sshd@14-10.0.0.134:22-10.0.0.1:38464.service - OpenSSH per-connection server daemon (10.0.0.1:38464).
Sep 13 00:18:28.907102 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 38464 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:28.908371 sshd[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:28.913236 systemd-logind[1524]: New session 15 of user core.
Sep 13 00:18:28.923824 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 13 00:18:29.039593 sshd[4120]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:29.058755 systemd[1]: Started sshd@15-10.0.0.134:22-10.0.0.1:38474.service - OpenSSH per-connection server daemon (10.0.0.1:38474).
Sep 13 00:18:29.059585 systemd[1]: sshd@14-10.0.0.134:22-10.0.0.1:38464.service: Deactivated successfully.
Sep 13 00:18:29.061121 systemd[1]: session-15.scope: Deactivated successfully.
Sep 13 00:18:29.062559 systemd-logind[1524]: Session 15 logged out. Waiting for processes to exit.
Sep 13 00:18:29.063528 systemd-logind[1524]: Removed session 15.
Sep 13 00:18:29.088030 sshd[4133]: Accepted publickey for core from 10.0.0.1 port 38474 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:29.089229 sshd[4133]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:29.095297 systemd-logind[1524]: New session 16 of user core.
Sep 13 00:18:29.103743 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 13 00:18:29.351318 sshd[4133]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:29.360746 systemd[1]: Started sshd@16-10.0.0.134:22-10.0.0.1:38490.service - OpenSSH per-connection server daemon (10.0.0.1:38490).
Sep 13 00:18:29.361104 systemd[1]: sshd@15-10.0.0.134:22-10.0.0.1:38474.service: Deactivated successfully.
Sep 13 00:18:29.364061 systemd[1]: session-16.scope: Deactivated successfully.
Sep 13 00:18:29.364838 systemd-logind[1524]: Session 16 logged out. Waiting for processes to exit.
Sep 13 00:18:29.366173 systemd-logind[1524]: Removed session 16.
Sep 13 00:18:29.395127 sshd[4145]: Accepted publickey for core from 10.0.0.1 port 38490 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:29.396699 sshd[4145]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:29.400550 systemd-logind[1524]: New session 17 of user core.
Sep 13 00:18:29.408734 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 13 00:18:30.578349 sshd[4145]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:30.588019 systemd[1]: Started sshd@17-10.0.0.134:22-10.0.0.1:37458.service - OpenSSH per-connection server daemon (10.0.0.1:37458).
Sep 13 00:18:30.588906 systemd[1]: sshd@16-10.0.0.134:22-10.0.0.1:38490.service: Deactivated successfully.
Sep 13 00:18:30.592616 systemd[1]: session-17.scope: Deactivated successfully.
Sep 13 00:18:30.593326 systemd-logind[1524]: Session 17 logged out. Waiting for processes to exit.
Sep 13 00:18:30.600661 systemd-logind[1524]: Removed session 17.
Sep 13 00:18:30.631180 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 37458 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:30.632474 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:30.636075 systemd-logind[1524]: New session 18 of user core.
Sep 13 00:18:30.646796 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 13 00:18:30.872565 sshd[4164]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:30.879947 systemd[1]: sshd@17-10.0.0.134:22-10.0.0.1:37458.service: Deactivated successfully.
Sep 13 00:18:30.882372 systemd[1]: session-18.scope: Deactivated successfully.
Sep 13 00:18:30.887272 systemd-logind[1524]: Session 18 logged out. Waiting for processes to exit.
Sep 13 00:18:30.896777 systemd[1]: Started sshd@18-10.0.0.134:22-10.0.0.1:37470.service - OpenSSH per-connection server daemon (10.0.0.1:37470).
Sep 13 00:18:30.897742 systemd-logind[1524]: Removed session 18.
Sep 13 00:18:30.942191 sshd[4182]: Accepted publickey for core from 10.0.0.1 port 37470 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:30.944102 sshd[4182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:30.948986 systemd-logind[1524]: New session 19 of user core.
Sep 13 00:18:30.966828 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 13 00:18:31.090918 sshd[4182]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:31.096279 systemd[1]: sshd@18-10.0.0.134:22-10.0.0.1:37470.service: Deactivated successfully.
Sep 13 00:18:31.099299 systemd-logind[1524]: Session 19 logged out. Waiting for processes to exit.
Sep 13 00:18:31.100183 systemd[1]: session-19.scope: Deactivated successfully.
Sep 13 00:18:31.101469 systemd-logind[1524]: Removed session 19.
Sep 13 00:18:36.101732 systemd[1]: Started sshd@19-10.0.0.134:22-10.0.0.1:37474.service - OpenSSH per-connection server daemon (10.0.0.1:37474).
Sep 13 00:18:36.130661 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 37474 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:36.131977 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:36.136027 systemd-logind[1524]: New session 20 of user core.
Sep 13 00:18:36.146738 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 13 00:18:36.255488 sshd[4202]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:36.258772 systemd[1]: sshd@19-10.0.0.134:22-10.0.0.1:37474.service: Deactivated successfully.
Sep 13 00:18:36.260848 systemd-logind[1524]: Session 20 logged out. Waiting for processes to exit.
Sep 13 00:18:36.261017 systemd[1]: session-20.scope: Deactivated successfully.
Sep 13 00:18:36.262033 systemd-logind[1524]: Removed session 20.
Sep 13 00:18:41.267736 systemd[1]: Started sshd@20-10.0.0.134:22-10.0.0.1:39206.service - OpenSSH per-connection server daemon (10.0.0.1:39206).
Sep 13 00:18:41.296569 sshd[4219]: Accepted publickey for core from 10.0.0.1 port 39206 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:41.297822 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:41.301923 systemd-logind[1524]: New session 21 of user core.
Sep 13 00:18:41.312744 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 13 00:18:41.416617 sshd[4219]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:41.419700 systemd[1]: sshd@20-10.0.0.134:22-10.0.0.1:39206.service: Deactivated successfully.
Sep 13 00:18:41.421769 systemd[1]: session-21.scope: Deactivated successfully.
Sep 13 00:18:41.422324 systemd-logind[1524]: Session 21 logged out. Waiting for processes to exit.
Sep 13 00:18:41.423205 systemd-logind[1524]: Removed session 21.
Sep 13 00:18:46.426737 systemd[1]: Started sshd@21-10.0.0.134:22-10.0.0.1:39212.service - OpenSSH per-connection server daemon (10.0.0.1:39212).
Sep 13 00:18:46.455799 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 39212 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:46.456925 sshd[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:46.460383 systemd-logind[1524]: New session 22 of user core.
Sep 13 00:18:46.471729 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 13 00:18:46.576610 sshd[4234]: pam_unix(sshd:session): session closed for user core
Sep 13 00:18:46.579700 systemd[1]: sshd@21-10.0.0.134:22-10.0.0.1:39212.service: Deactivated successfully.
Sep 13 00:18:46.582298 systemd[1]: session-22.scope: Deactivated successfully.
Sep 13 00:18:46.583187 systemd-logind[1524]: Session 22 logged out. Waiting for processes to exit.
Sep 13 00:18:46.593730 systemd[1]: Started sshd@22-10.0.0.134:22-10.0.0.1:39224.service - OpenSSH per-connection server daemon (10.0.0.1:39224).
Sep 13 00:18:46.595553 systemd-logind[1524]: Removed session 22.
Sep 13 00:18:46.624197 sshd[4250]: Accepted publickey for core from 10.0.0.1 port 39224 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw
Sep 13 00:18:46.625435 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 13 00:18:46.629176 systemd-logind[1524]: New session 23 of user core.
Sep 13 00:18:46.636718 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 13 00:18:48.569323 containerd[1537]: time="2025-09-13T00:18:48.569279770Z" level=info msg="StopContainer for \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\" with timeout 30 (s)"
Sep 13 00:18:48.570319 containerd[1537]: time="2025-09-13T00:18:48.570280172Z" level=info msg="Stop container \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\" with signal terminated"
Sep 13 00:18:48.618185 containerd[1537]: time="2025-09-13T00:18:48.618146759Z" level=info msg="StopContainer for \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\" with timeout 2 (s)"
Sep 13 00:18:48.618562 containerd[1537]: time="2025-09-13T00:18:48.618538959Z" level=info msg="Stop container \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\" with signal terminated"
Sep 13 00:18:48.625020 systemd-networkd[1228]: lxc_health: Link DOWN
Sep 13 00:18:48.625026 systemd-networkd[1228]: lxc_health: Lost carrier
Sep 13 00:18:48.631119 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b-rootfs.mount: Deactivated successfully.
Sep 13 00:18:48.631563 containerd[1537]: time="2025-09-13T00:18:48.631381817Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 13 00:18:48.640658 containerd[1537]: time="2025-09-13T00:18:48.640602750Z" level=info msg="shim disconnected" id=4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b namespace=k8s.io
Sep 13 00:18:48.641093 containerd[1537]: time="2025-09-13T00:18:48.640729350Z" level=warning msg="cleaning up after shim disconnected" id=4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b namespace=k8s.io
Sep 13 00:18:48.641093 containerd[1537]: time="2025-09-13T00:18:48.640878990Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:48.680836 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38-rootfs.mount: Deactivated successfully.
Sep 13 00:18:48.688015 containerd[1537]: time="2025-09-13T00:18:48.687959616Z" level=info msg="shim disconnected" id=555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38 namespace=k8s.io
Sep 13 00:18:48.688015 containerd[1537]: time="2025-09-13T00:18:48.688014296Z" level=warning msg="cleaning up after shim disconnected" id=555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38 namespace=k8s.io
Sep 13 00:18:48.688015 containerd[1537]: time="2025-09-13T00:18:48.688023336Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:48.694965 containerd[1537]: time="2025-09-13T00:18:48.694915946Z" level=info msg="StopContainer for \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\" returns successfully"
Sep 13 00:18:48.695747 containerd[1537]: time="2025-09-13T00:18:48.695719307Z" level=info msg="StopPodSandbox for \"11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708\""
Sep 13 00:18:48.695799 containerd[1537]: time="2025-09-13T00:18:48.695760947Z" level=info msg="Container to stop \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:18:48.697462 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708-shm.mount: Deactivated successfully.
Sep 13 00:18:48.710999 containerd[1537]: time="2025-09-13T00:18:48.710665648Z" level=info msg="StopContainer for \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\" returns successfully"
Sep 13 00:18:48.711302 containerd[1537]: time="2025-09-13T00:18:48.711273969Z" level=info msg="StopPodSandbox for \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\""
Sep 13 00:18:48.712583 containerd[1537]: time="2025-09-13T00:18:48.712553131Z" level=info msg="Container to stop \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:18:48.712680 containerd[1537]: time="2025-09-13T00:18:48.712665251Z" level=info msg="Container to stop \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:18:48.712742 containerd[1537]: time="2025-09-13T00:18:48.712727811Z" level=info msg="Container to stop \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:18:48.712792 containerd[1537]: time="2025-09-13T00:18:48.712779731Z" level=info msg="Container to stop \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:18:48.712846 containerd[1537]: time="2025-09-13T00:18:48.712832291Z" level=info msg="Container to stop \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 13 00:18:48.714616 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87-shm.mount: Deactivated successfully.
Sep 13 00:18:48.740806 containerd[1537]: time="2025-09-13T00:18:48.740748650Z" level=info msg="shim disconnected" id=490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87 namespace=k8s.io
Sep 13 00:18:48.740806 containerd[1537]: time="2025-09-13T00:18:48.740801250Z" level=warning msg="cleaning up after shim disconnected" id=490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87 namespace=k8s.io
Sep 13 00:18:48.740806 containerd[1537]: time="2025-09-13T00:18:48.740810530Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:48.741650 containerd[1537]: time="2025-09-13T00:18:48.741351611Z" level=info msg="shim disconnected" id=11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708 namespace=k8s.io
Sep 13 00:18:48.742596 containerd[1537]: time="2025-09-13T00:18:48.742430853Z" level=warning msg="cleaning up after shim disconnected" id=11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708 namespace=k8s.io
Sep 13 00:18:48.742596 containerd[1537]: time="2025-09-13T00:18:48.742454133Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:48.759806 containerd[1537]: time="2025-09-13T00:18:48.759762877Z" level=warning msg="cleanup warnings time=\"2025-09-13T00:18:48Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Sep 13 00:18:48.760951 containerd[1537]: time="2025-09-13T00:18:48.760924759Z" level=info msg="TearDown network for sandbox \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" successfully"
Sep 13 00:18:48.761004 containerd[1537]: time="2025-09-13T00:18:48.760951799Z" level=info msg="StopPodSandbox for \"490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87\" returns successfully"
Sep 13 00:18:48.778207 containerd[1537]: time="2025-09-13T00:18:48.777253181Z" level=info msg="TearDown network for sandbox 
\"11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708\" successfully" Sep 13 00:18:48.778207 containerd[1537]: time="2025-09-13T00:18:48.777292981Z" level=info msg="StopPodSandbox for \"11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708\" returns successfully" Sep 13 00:18:48.931285 kubelet[2614]: I0913 00:18:48.931124 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-etc-cni-netd\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.931285 kubelet[2614]: I0913 00:18:48.931277 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/905074f0-ef7c-4402-8c5f-8f7737a5b78a-clustermesh-secrets\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.931967 kubelet[2614]: I0913 00:18:48.931303 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pw8n9\" (UniqueName: \"kubernetes.io/projected/7c0c532e-c9db-414e-8a19-4107ef34595c-kube-api-access-pw8n9\") pod \"7c0c532e-c9db-414e-8a19-4107ef34595c\" (UID: \"7c0c532e-c9db-414e-8a19-4107ef34595c\") " Sep 13 00:18:48.931967 kubelet[2614]: I0913 00:18:48.931320 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0c532e-c9db-414e-8a19-4107ef34595c-cilium-config-path\") pod \"7c0c532e-c9db-414e-8a19-4107ef34595c\" (UID: \"7c0c532e-c9db-414e-8a19-4107ef34595c\") " Sep 13 00:18:48.931967 kubelet[2614]: I0913 00:18:48.931336 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-xtables-lock\") pod 
\"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932344 kubelet[2614]: I0913 00:18:48.931350 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-cgroup\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932344 kubelet[2614]: I0913 00:18:48.932075 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hubble-tls\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932344 kubelet[2614]: I0913 00:18:48.932095 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-lib-modules\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932344 kubelet[2614]: I0913 00:18:48.932109 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hostproc\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932344 kubelet[2614]: I0913 00:18:48.932122 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-bpf-maps\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932821 kubelet[2614]: I0913 00:18:48.932791 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-run\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932821 kubelet[2614]: I0913 00:18:48.932820 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-config-path\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932912 kubelet[2614]: I0913 00:18:48.932839 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-net\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932912 kubelet[2614]: I0913 00:18:48.932856 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xbzgv\" (UniqueName: \"kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-kube-api-access-xbzgv\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932912 kubelet[2614]: I0913 00:18:48.932881 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-kernel\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.932912 kubelet[2614]: I0913 00:18:48.932898 2614 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cni-path\") pod \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\" (UID: \"905074f0-ef7c-4402-8c5f-8f7737a5b78a\") " Sep 13 00:18:48.934348 kubelet[2614]: I0913 
00:18:48.934065 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cni-path" (OuterVolumeSpecName: "cni-path") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.934348 kubelet[2614]: I0913 00:18:48.934162 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936210 kubelet[2614]: I0913 00:18:48.934607 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936210 kubelet[2614]: I0913 00:18:48.934645 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936210 kubelet[2614]: I0913 00:18:48.934662 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936210 kubelet[2614]: I0913 00:18:48.934664 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936210 kubelet[2614]: I0913 00:18:48.934687 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936425 kubelet[2614]: I0913 00:18:48.934676 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hostproc" (OuterVolumeSpecName: "hostproc") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936425 kubelet[2614]: I0913 00:18:48.934707 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.936744 kubelet[2614]: I0913 00:18:48.936713 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:18:48.936812 kubelet[2614]: I0913 00:18:48.936784 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c0c532e-c9db-414e-8a19-4107ef34595c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c0c532e-c9db-414e-8a19-4107ef34595c" (UID: "7c0c532e-c9db-414e-8a19-4107ef34595c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:18:48.936850 kubelet[2614]: I0913 00:18:48.936782 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c0c532e-c9db-414e-8a19-4107ef34595c-kube-api-access-pw8n9" (OuterVolumeSpecName: "kube-api-access-pw8n9") pod "7c0c532e-c9db-414e-8a19-4107ef34595c" (UID: "7c0c532e-c9db-414e-8a19-4107ef34595c"). InnerVolumeSpecName "kube-api-access-pw8n9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:18:48.936850 kubelet[2614]: I0913 00:18:48.936842 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Sep 13 00:18:48.937268 kubelet[2614]: I0913 00:18:48.937236 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Sep 13 00:18:48.938790 kubelet[2614]: I0913 00:18:48.938764 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/905074f0-ef7c-4402-8c5f-8f7737a5b78a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Sep 13 00:18:48.938790 kubelet[2614]: I0913 00:18:48.938776 2614 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-kube-api-access-xbzgv" (OuterVolumeSpecName: "kube-api-access-xbzgv") pod "905074f0-ef7c-4402-8c5f-8f7737a5b78a" (UID: "905074f0-ef7c-4402-8c5f-8f7737a5b78a"). InnerVolumeSpecName "kube-api-access-xbzgv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Sep 13 00:18:49.033778 kubelet[2614]: I0913 00:18:49.033729 2614 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033778 kubelet[2614]: I0913 00:18:49.033765 2614 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033778 kubelet[2614]: I0913 00:18:49.033775 2614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xbzgv\" (UniqueName: \"kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-kube-api-access-xbzgv\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033778 kubelet[2614]: I0913 00:18:49.033785 2614 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033778 kubelet[2614]: I0913 00:18:49.033794 2614 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033982 kubelet[2614]: I0913 00:18:49.033802 2614 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033982 kubelet[2614]: I0913 00:18:49.033810 2614 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/905074f0-ef7c-4402-8c5f-8f7737a5b78a-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 13 
00:18:49.033982 kubelet[2614]: I0913 00:18:49.033819 2614 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pw8n9\" (UniqueName: \"kubernetes.io/projected/7c0c532e-c9db-414e-8a19-4107ef34595c-kube-api-access-pw8n9\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033982 kubelet[2614]: I0913 00:18:49.033826 2614 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c0c532e-c9db-414e-8a19-4107ef34595c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033982 kubelet[2614]: I0913 00:18:49.033834 2614 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033982 kubelet[2614]: I0913 00:18:49.033841 2614 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033982 kubelet[2614]: I0913 00:18:49.033848 2614 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.033982 kubelet[2614]: I0913 00:18:49.033857 2614 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.034164 kubelet[2614]: I0913 00:18:49.033865 2614 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.034164 kubelet[2614]: I0913 00:18:49.033871 2614 reconciler_common.go:293] "Volume detached for volume 
\"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.034164 kubelet[2614]: I0913 00:18:49.033880 2614 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/905074f0-ef7c-4402-8c5f-8f7737a5b78a-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 13 00:18:49.077692 kubelet[2614]: E0913 00:18:49.077645 2614 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 13 00:18:49.256538 kubelet[2614]: I0913 00:18:49.256425 2614 scope.go:117] "RemoveContainer" containerID="4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b" Sep 13 00:18:49.259552 containerd[1537]: time="2025-09-13T00:18:49.259313728Z" level=info msg="RemoveContainer for \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\"" Sep 13 00:18:49.262170 containerd[1537]: time="2025-09-13T00:18:49.262122252Z" level=info msg="RemoveContainer for \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\" returns successfully" Sep 13 00:18:49.275852 kubelet[2614]: I0913 00:18:49.275816 2614 scope.go:117] "RemoveContainer" containerID="4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b" Sep 13 00:18:49.276100 containerd[1537]: time="2025-09-13T00:18:49.276054951Z" level=error msg="ContainerStatus for \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\": not found" Sep 13 00:18:49.278160 kubelet[2614]: E0913 00:18:49.278121 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\": not found" containerID="4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b" Sep 13 00:18:49.278269 kubelet[2614]: I0913 00:18:49.278187 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b"} err="failed to get container status \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4f856f45d33d022ff28a6085bf7a38d45717f237a6d466c13d4be52422a96d8b\": not found" Sep 13 00:18:49.278301 kubelet[2614]: I0913 00:18:49.278271 2614 scope.go:117] "RemoveContainer" containerID="555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38" Sep 13 00:18:49.280381 containerd[1537]: time="2025-09-13T00:18:49.280261637Z" level=info msg="RemoveContainer for \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\"" Sep 13 00:18:49.288369 containerd[1537]: time="2025-09-13T00:18:49.288317408Z" level=info msg="RemoveContainer for \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\" returns successfully" Sep 13 00:18:49.289515 kubelet[2614]: I0913 00:18:49.289471 2614 scope.go:117] "RemoveContainer" containerID="0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23" Sep 13 00:18:49.291257 containerd[1537]: time="2025-09-13T00:18:49.291231012Z" level=info msg="RemoveContainer for \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\"" Sep 13 00:18:49.294510 containerd[1537]: time="2025-09-13T00:18:49.294471617Z" level=info msg="RemoveContainer for \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\" returns successfully" Sep 13 00:18:49.294697 kubelet[2614]: I0913 00:18:49.294663 2614 scope.go:117] "RemoveContainer" containerID="b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53" Sep 13 00:18:49.295657 
containerd[1537]: time="2025-09-13T00:18:49.295562698Z" level=info msg="RemoveContainer for \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\"" Sep 13 00:18:49.298002 containerd[1537]: time="2025-09-13T00:18:49.297968661Z" level=info msg="RemoveContainer for \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\" returns successfully" Sep 13 00:18:49.298177 kubelet[2614]: I0913 00:18:49.298147 2614 scope.go:117] "RemoveContainer" containerID="5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540" Sep 13 00:18:49.299104 containerd[1537]: time="2025-09-13T00:18:49.299077743Z" level=info msg="RemoveContainer for \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\"" Sep 13 00:18:49.301328 containerd[1537]: time="2025-09-13T00:18:49.301302746Z" level=info msg="RemoveContainer for \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\" returns successfully" Sep 13 00:18:49.301486 kubelet[2614]: I0913 00:18:49.301450 2614 scope.go:117] "RemoveContainer" containerID="bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e" Sep 13 00:18:49.302568 containerd[1537]: time="2025-09-13T00:18:49.302544028Z" level=info msg="RemoveContainer for \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\"" Sep 13 00:18:49.304885 containerd[1537]: time="2025-09-13T00:18:49.304797191Z" level=info msg="RemoveContainer for \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\" returns successfully" Sep 13 00:18:49.305003 kubelet[2614]: I0913 00:18:49.304958 2614 scope.go:117] "RemoveContainer" containerID="555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38" Sep 13 00:18:49.305205 containerd[1537]: time="2025-09-13T00:18:49.305174791Z" level=error msg="ContainerStatus for \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\": not found" Sep 13 00:18:49.305370 kubelet[2614]: E0913 00:18:49.305346 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\": not found" containerID="555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38" Sep 13 00:18:49.305406 kubelet[2614]: I0913 00:18:49.305377 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38"} err="failed to get container status \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\": rpc error: code = NotFound desc = an error occurred when try to find container \"555d05d7efbb12c1d5ae27dcf04c8c4726812ba2f5b9a82e8485e59152aa4d38\": not found" Sep 13 00:18:49.305406 kubelet[2614]: I0913 00:18:49.305399 2614 scope.go:117] "RemoveContainer" containerID="0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23" Sep 13 00:18:49.305616 containerd[1537]: time="2025-09-13T00:18:49.305583952Z" level=error msg="ContainerStatus for \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\": not found" Sep 13 00:18:49.305854 kubelet[2614]: E0913 00:18:49.305832 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\": not found" containerID="0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23" Sep 13 00:18:49.305903 kubelet[2614]: I0913 00:18:49.305859 2614 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23"} err="failed to get container status \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\": rpc error: code = NotFound desc = an error occurred when try to find container \"0686b4d00a03fbf88789dd9cfca8f7a5d76244e89c860a7c7d4a9f061d0a2e23\": not found" Sep 13 00:18:49.305903 kubelet[2614]: I0913 00:18:49.305887 2614 scope.go:117] "RemoveContainer" containerID="b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53" Sep 13 00:18:49.306080 containerd[1537]: time="2025-09-13T00:18:49.306047912Z" level=error msg="ContainerStatus for \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\": not found" Sep 13 00:18:49.306181 kubelet[2614]: E0913 00:18:49.306162 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\": not found" containerID="b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53" Sep 13 00:18:49.306218 kubelet[2614]: I0913 00:18:49.306195 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53"} err="failed to get container status \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\": rpc error: code = NotFound desc = an error occurred when try to find container \"b092cf97b0c3052efc85d4a726deb055c0cefb23ef9209a0537b5d2e070a6b53\": not found" Sep 13 00:18:49.306218 kubelet[2614]: I0913 00:18:49.306213 2614 scope.go:117] "RemoveContainer" containerID="5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540" Sep 13 00:18:49.306643 containerd[1537]: 
time="2025-09-13T00:18:49.306382513Z" level=error msg="ContainerStatus for \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\": not found" Sep 13 00:18:49.306718 kubelet[2614]: E0913 00:18:49.306524 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\": not found" containerID="5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540" Sep 13 00:18:49.306718 kubelet[2614]: I0913 00:18:49.306549 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540"} err="failed to get container status \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\": rpc error: code = NotFound desc = an error occurred when try to find container \"5c0f869dcc2b57c3ee7e67e978ae61a59ea8dd60b99d95c0ca9b0d8c12946540\": not found" Sep 13 00:18:49.306718 kubelet[2614]: I0913 00:18:49.306565 2614 scope.go:117] "RemoveContainer" containerID="bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e" Sep 13 00:18:49.306795 containerd[1537]: time="2025-09-13T00:18:49.306740073Z" level=error msg="ContainerStatus for \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\": not found" Sep 13 00:18:49.306888 kubelet[2614]: E0913 00:18:49.306863 2614 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\": not 
found" containerID="bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e" Sep 13 00:18:49.306925 kubelet[2614]: I0913 00:18:49.306892 2614 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e"} err="failed to get container status \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\": rpc error: code = NotFound desc = an error occurred when try to find container \"bc01e30dc9de54b9262c4aba5ec7cdb861c7d9b8f586571a21e1b2d7bfd8054e\": not found" Sep 13 00:18:49.591279 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-11f6b8a835a5830434f4444f3c1b3790afa841dcf89d756e92d577370f26a708-rootfs.mount: Deactivated successfully. Sep 13 00:18:49.591420 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-490707f94cc03b1d9f1c7171e96d13b9e79382e70ae6667a3ae847374ba91a87-rootfs.mount: Deactivated successfully. Sep 13 00:18:49.591545 systemd[1]: var-lib-kubelet-pods-7c0c532e\x2dc9db\x2d414e\x2d8a19\x2d4107ef34595c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpw8n9.mount: Deactivated successfully. Sep 13 00:18:49.591630 systemd[1]: var-lib-kubelet-pods-905074f0\x2def7c\x2d4402\x2d8c5f\x2d8f7737a5b78a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxbzgv.mount: Deactivated successfully. Sep 13 00:18:49.591716 systemd[1]: var-lib-kubelet-pods-905074f0\x2def7c\x2d4402\x2d8c5f\x2d8f7737a5b78a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 13 00:18:49.591792 systemd[1]: var-lib-kubelet-pods-905074f0\x2def7c\x2d4402\x2d8c5f\x2d8f7737a5b78a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 13 00:18:50.029676 kubelet[2614]: I0913 00:18:50.028861 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c0c532e-c9db-414e-8a19-4107ef34595c" path="/var/lib/kubelet/pods/7c0c532e-c9db-414e-8a19-4107ef34595c/volumes" Sep 13 00:18:50.029676 kubelet[2614]: I0913 00:18:50.029246 2614 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905074f0-ef7c-4402-8c5f-8f7737a5b78a" path="/var/lib/kubelet/pods/905074f0-ef7c-4402-8c5f-8f7737a5b78a/volumes" Sep 13 00:18:50.526908 sshd[4250]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:50.533722 systemd[1]: Started sshd@23-10.0.0.134:22-10.0.0.1:41724.service - OpenSSH per-connection server daemon (10.0.0.1:41724). Sep 13 00:18:50.534105 systemd[1]: sshd@22-10.0.0.134:22-10.0.0.1:39224.service: Deactivated successfully. Sep 13 00:18:50.536147 systemd-logind[1524]: Session 23 logged out. Waiting for processes to exit. Sep 13 00:18:50.536529 systemd[1]: session-23.scope: Deactivated successfully. Sep 13 00:18:50.537822 systemd-logind[1524]: Removed session 23. Sep 13 00:18:50.564317 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 41724 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:18:50.565615 sshd[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:50.569600 systemd-logind[1524]: New session 24 of user core. Sep 13 00:18:50.578768 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 13 00:18:51.959980 sshd[4415]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:51.970604 systemd[1]: Started sshd@24-10.0.0.134:22-10.0.0.1:41726.service - OpenSSH per-connection server daemon (10.0.0.1:41726). Sep 13 00:18:51.972323 systemd[1]: sshd@23-10.0.0.134:22-10.0.0.1:41724.service: Deactivated successfully. 
Sep 13 00:18:51.974409 kubelet[2614]: E0913 00:18:51.973994 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="905074f0-ef7c-4402-8c5f-8f7737a5b78a" containerName="apply-sysctl-overwrites" Sep 13 00:18:51.974409 kubelet[2614]: E0913 00:18:51.974034 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="905074f0-ef7c-4402-8c5f-8f7737a5b78a" containerName="mount-cgroup" Sep 13 00:18:51.974409 kubelet[2614]: E0913 00:18:51.974043 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="905074f0-ef7c-4402-8c5f-8f7737a5b78a" containerName="mount-bpf-fs" Sep 13 00:18:51.974409 kubelet[2614]: E0913 00:18:51.974049 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c0c532e-c9db-414e-8a19-4107ef34595c" containerName="cilium-operator" Sep 13 00:18:51.974409 kubelet[2614]: E0913 00:18:51.974054 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="905074f0-ef7c-4402-8c5f-8f7737a5b78a" containerName="clean-cilium-state" Sep 13 00:18:51.974409 kubelet[2614]: E0913 00:18:51.974060 2614 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="905074f0-ef7c-4402-8c5f-8f7737a5b78a" containerName="cilium-agent" Sep 13 00:18:51.974409 kubelet[2614]: I0913 00:18:51.974083 2614 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c0c532e-c9db-414e-8a19-4107ef34595c" containerName="cilium-operator" Sep 13 00:18:51.974409 kubelet[2614]: I0913 00:18:51.974090 2614 memory_manager.go:354] "RemoveStaleState removing state" podUID="905074f0-ef7c-4402-8c5f-8f7737a5b78a" containerName="cilium-agent" Sep 13 00:18:51.978764 systemd[1]: session-24.scope: Deactivated successfully. Sep 13 00:18:51.981209 systemd-logind[1524]: Session 24 logged out. Waiting for processes to exit. Sep 13 00:18:51.987260 systemd-logind[1524]: Removed session 24. 
Sep 13 00:18:52.020233 sshd[4429]: Accepted publickey for core from 10.0.0.1 port 41726 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:18:52.021563 sshd[4429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:52.025939 systemd-logind[1524]: New session 25 of user core. Sep 13 00:18:52.042867 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 13 00:18:52.051624 kubelet[2614]: I0913 00:18:52.051588 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-cilium-config-path\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051733 kubelet[2614]: I0913 00:18:52.051629 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-cilium-ipsec-secrets\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051733 kubelet[2614]: I0913 00:18:52.051650 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-xtables-lock\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051733 kubelet[2614]: I0913 00:18:52.051666 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m59r2\" (UniqueName: \"kubernetes.io/projected/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-kube-api-access-m59r2\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051733 kubelet[2614]: I0913 
00:18:52.051683 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-hostproc\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051733 kubelet[2614]: I0913 00:18:52.051697 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-etc-cni-netd\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051733 kubelet[2614]: I0913 00:18:52.051711 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-hubble-tls\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051862 kubelet[2614]: I0913 00:18:52.051727 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-cilium-run\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051862 kubelet[2614]: I0913 00:18:52.051740 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-cni-path\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051862 kubelet[2614]: I0913 00:18:52.051755 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-host-proc-sys-net\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051862 kubelet[2614]: I0913 00:18:52.051768 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-lib-modules\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051862 kubelet[2614]: I0913 00:18:52.051783 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-clustermesh-secrets\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051862 kubelet[2614]: I0913 00:18:52.051796 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-host-proc-sys-kernel\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051979 kubelet[2614]: I0913 00:18:52.051811 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-cilium-cgroup\") pod \"cilium-btzcd\" (UID: \"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.051979 kubelet[2614]: I0913 00:18:52.051826 2614 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f9c5ea1-cb8d-4c07-868f-c694c9a02888-bpf-maps\") pod \"cilium-btzcd\" (UID: 
\"0f9c5ea1-cb8d-4c07-868f-c694c9a02888\") " pod="kube-system/cilium-btzcd" Sep 13 00:18:52.092383 sshd[4429]: pam_unix(sshd:session): session closed for user core Sep 13 00:18:52.101786 systemd[1]: Started sshd@25-10.0.0.134:22-10.0.0.1:41736.service - OpenSSH per-connection server daemon (10.0.0.1:41736). Sep 13 00:18:52.102184 systemd[1]: sshd@24-10.0.0.134:22-10.0.0.1:41726.service: Deactivated successfully. Sep 13 00:18:52.105744 systemd-logind[1524]: Session 25 logged out. Waiting for processes to exit. Sep 13 00:18:52.105923 systemd[1]: session-25.scope: Deactivated successfully. Sep 13 00:18:52.107038 systemd-logind[1524]: Removed session 25. Sep 13 00:18:52.132987 sshd[4438]: Accepted publickey for core from 10.0.0.1 port 41736 ssh2: RSA SHA256:pv+Vh8Ko8wdl4K2IVWbNSELsO8ydI+ThTypq2OJGNCw Sep 13 00:18:52.134347 sshd[4438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 13 00:18:52.137985 systemd-logind[1524]: New session 26 of user core. Sep 13 00:18:52.150756 systemd[1]: Started session-26.scope - Session 26 of User core. Sep 13 00:18:52.288119 kubelet[2614]: E0913 00:18:52.288067 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 13 00:18:52.288834 containerd[1537]: time="2025-09-13T00:18:52.288570883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btzcd,Uid:0f9c5ea1-cb8d-4c07-868f-c694c9a02888,Namespace:kube-system,Attempt:0,}" Sep 13 00:18:52.309230 containerd[1537]: time="2025-09-13T00:18:52.309135829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Sep 13 00:18:52.309230 containerd[1537]: time="2025-09-13T00:18:52.309195710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Sep 13 00:18:52.309230 containerd[1537]: time="2025-09-13T00:18:52.309210950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:18:52.309548 containerd[1537]: time="2025-09-13T00:18:52.309450990Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Sep 13 00:18:52.345066 containerd[1537]: time="2025-09-13T00:18:52.345029516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-btzcd,Uid:0f9c5ea1-cb8d-4c07-868f-c694c9a02888,Namespace:kube-system,Attempt:0,} returns sandbox id \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\""
Sep 13 00:18:52.345852 kubelet[2614]: E0913 00:18:52.345704 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:52.349213 containerd[1537]: time="2025-09-13T00:18:52.349116321Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 13 00:18:52.363846 containerd[1537]: time="2025-09-13T00:18:52.363791020Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bdbf6a455f05907f7a6aa475f19f078d4637b604885266d793a2c586abf3586e\""
Sep 13 00:18:52.365369 containerd[1537]: time="2025-09-13T00:18:52.364333020Z" level=info msg="StartContainer for \"bdbf6a455f05907f7a6aa475f19f078d4637b604885266d793a2c586abf3586e\""
Sep 13 00:18:52.406096 containerd[1537]: time="2025-09-13T00:18:52.406049634Z" level=info msg="StartContainer for \"bdbf6a455f05907f7a6aa475f19f078d4637b604885266d793a2c586abf3586e\" returns successfully"
Sep 13 00:18:52.438775 containerd[1537]: time="2025-09-13T00:18:52.438701836Z" level=info msg="shim disconnected" id=bdbf6a455f05907f7a6aa475f19f078d4637b604885266d793a2c586abf3586e namespace=k8s.io
Sep 13 00:18:52.438775 containerd[1537]: time="2025-09-13T00:18:52.438762996Z" level=warning msg="cleaning up after shim disconnected" id=bdbf6a455f05907f7a6aa475f19f078d4637b604885266d793a2c586abf3586e namespace=k8s.io
Sep 13 00:18:52.438775 containerd[1537]: time="2025-09-13T00:18:52.438772236Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:53.027369 kubelet[2614]: E0913 00:18:53.027288 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:53.272121 kubelet[2614]: E0913 00:18:53.271991 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:53.275404 containerd[1537]: time="2025-09-13T00:18:53.275271661Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 13 00:18:53.289578 containerd[1537]: time="2025-09-13T00:18:53.289458559Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5cc95a4a51301c2854558665293b422dcce2d790f8a6f4d7f974a9d257016c76\""
Sep 13 00:18:53.290737 containerd[1537]: time="2025-09-13T00:18:53.290264880Z" level=info msg="StartContainer for \"5cc95a4a51301c2854558665293b422dcce2d790f8a6f4d7f974a9d257016c76\""
Sep 13 00:18:53.338806 containerd[1537]: time="2025-09-13T00:18:53.338758941Z" level=info msg="StartContainer for \"5cc95a4a51301c2854558665293b422dcce2d790f8a6f4d7f974a9d257016c76\" returns successfully"
Sep 13 00:18:53.364963 containerd[1537]: time="2025-09-13T00:18:53.364906054Z" level=info msg="shim disconnected" id=5cc95a4a51301c2854558665293b422dcce2d790f8a6f4d7f974a9d257016c76 namespace=k8s.io
Sep 13 00:18:53.365373 containerd[1537]: time="2025-09-13T00:18:53.365005214Z" level=warning msg="cleaning up after shim disconnected" id=5cc95a4a51301c2854558665293b422dcce2d790f8a6f4d7f974a9d257016c76 namespace=k8s.io
Sep 13 00:18:53.365373 containerd[1537]: time="2025-09-13T00:18:53.365025574Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:54.078950 kubelet[2614]: E0913 00:18:54.078902 2614 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 13 00:18:54.156473 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5cc95a4a51301c2854558665293b422dcce2d790f8a6f4d7f974a9d257016c76-rootfs.mount: Deactivated successfully.
Sep 13 00:18:54.277989 kubelet[2614]: E0913 00:18:54.277949 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:54.281521 containerd[1537]: time="2025-09-13T00:18:54.280414916Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 13 00:18:54.295870 containerd[1537]: time="2025-09-13T00:18:54.295818895Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b651321927e2ee8b659c0c535a42f4d2ba7248abe78882d7cd30b3d9f3e78c96\""
Sep 13 00:18:54.298002 containerd[1537]: time="2025-09-13T00:18:54.297586497Z" level=info msg="StartContainer for \"b651321927e2ee8b659c0c535a42f4d2ba7248abe78882d7cd30b3d9f3e78c96\""
Sep 13 00:18:54.349710 containerd[1537]: time="2025-09-13T00:18:54.349607681Z" level=info msg="StartContainer for \"b651321927e2ee8b659c0c535a42f4d2ba7248abe78882d7cd30b3d9f3e78c96\" returns successfully"
Sep 13 00:18:54.368868 containerd[1537]: time="2025-09-13T00:18:54.368816544Z" level=info msg="shim disconnected" id=b651321927e2ee8b659c0c535a42f4d2ba7248abe78882d7cd30b3d9f3e78c96 namespace=k8s.io
Sep 13 00:18:54.369238 containerd[1537]: time="2025-09-13T00:18:54.369102985Z" level=warning msg="cleaning up after shim disconnected" id=b651321927e2ee8b659c0c535a42f4d2ba7248abe78882d7cd30b3d9f3e78c96 namespace=k8s.io
Sep 13 00:18:54.369238 containerd[1537]: time="2025-09-13T00:18:54.369120185Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:55.026864 kubelet[2614]: E0913 00:18:55.026828 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:55.156574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b651321927e2ee8b659c0c535a42f4d2ba7248abe78882d7cd30b3d9f3e78c96-rootfs.mount: Deactivated successfully.
Sep 13 00:18:55.281937 kubelet[2614]: E0913 00:18:55.281693 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:55.285939 containerd[1537]: time="2025-09-13T00:18:55.285894944Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 13 00:18:55.296691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018245645.mount: Deactivated successfully.
Sep 13 00:18:55.297745 containerd[1537]: time="2025-09-13T00:18:55.297604558Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cacd75c98d833aca9299c32e5e25a1faf6e19fe34b2696e5b4afc59e15e24338\""
Sep 13 00:18:55.298210 containerd[1537]: time="2025-09-13T00:18:55.298185559Z" level=info msg="StartContainer for \"cacd75c98d833aca9299c32e5e25a1faf6e19fe34b2696e5b4afc59e15e24338\""
Sep 13 00:18:55.338927 containerd[1537]: time="2025-09-13T00:18:55.338883688Z" level=info msg="StartContainer for \"cacd75c98d833aca9299c32e5e25a1faf6e19fe34b2696e5b4afc59e15e24338\" returns successfully"
Sep 13 00:18:55.356749 containerd[1537]: time="2025-09-13T00:18:55.356571269Z" level=info msg="shim disconnected" id=cacd75c98d833aca9299c32e5e25a1faf6e19fe34b2696e5b4afc59e15e24338 namespace=k8s.io
Sep 13 00:18:55.356749 containerd[1537]: time="2025-09-13T00:18:55.356620589Z" level=warning msg="cleaning up after shim disconnected" id=cacd75c98d833aca9299c32e5e25a1faf6e19fe34b2696e5b4afc59e15e24338 namespace=k8s.io
Sep 13 00:18:55.356749 containerd[1537]: time="2025-09-13T00:18:55.356628909Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 13 00:18:56.156627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cacd75c98d833aca9299c32e5e25a1faf6e19fe34b2696e5b4afc59e15e24338-rootfs.mount: Deactivated successfully.
Sep 13 00:18:56.285673 kubelet[2614]: E0913 00:18:56.285605 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:56.288376 containerd[1537]: time="2025-09-13T00:18:56.288336583Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 13 00:18:56.313529 containerd[1537]: time="2025-09-13T00:18:56.313465173Z" level=info msg="CreateContainer within sandbox \"718c85af635476aaccafb97a9a8220df5d838c73ebe77b7a7f8e659683b73a93\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"05b5d0283276182d6b32bbe47c192f87ea36716eeb8d3cc36fea551b52e6cecb\""
Sep 13 00:18:56.316059 containerd[1537]: time="2025-09-13T00:18:56.314945495Z" level=info msg="StartContainer for \"05b5d0283276182d6b32bbe47c192f87ea36716eeb8d3cc36fea551b52e6cecb\""
Sep 13 00:18:56.359129 containerd[1537]: time="2025-09-13T00:18:56.359086747Z" level=info msg="StartContainer for \"05b5d0283276182d6b32bbe47c192f87ea36716eeb8d3cc36fea551b52e6cecb\" returns successfully"
Sep 13 00:18:56.578134 kubelet[2614]: I0913 00:18:56.578024 2614 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-13T00:18:56Z","lastTransitionTime":"2025-09-13T00:18:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Sep 13 00:18:56.611608 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 13 00:18:57.292471 kubelet[2614]: E0913 00:18:57.292027 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:57.311331 kubelet[2614]: I0913 00:18:57.311024 2614 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-btzcd" podStartSLOduration=6.311003781 podStartE2EDuration="6.311003781s" podCreationTimestamp="2025-09-13 00:18:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-13 00:18:57.308661779 +0000 UTC m=+83.364611718" watchObservedRunningTime="2025-09-13 00:18:57.311003781 +0000 UTC m=+83.366953720"
Sep 13 00:18:58.029341 kubelet[2614]: E0913 00:18:58.029307 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:58.302048 kubelet[2614]: E0913 00:18:58.301918 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:18:59.540977 systemd-networkd[1228]: lxc_health: Link UP
Sep 13 00:18:59.548703 systemd-networkd[1228]: lxc_health: Gained carrier
Sep 13 00:19:00.290196 kubelet[2614]: E0913 00:19:00.290145 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:19:00.307460 kubelet[2614]: E0913 00:19:00.307411 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:19:01.016695 systemd-networkd[1228]: lxc_health: Gained IPv6LL
Sep 13 00:19:01.310472 kubelet[2614]: E0913 00:19:01.310223 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:19:02.734469 systemd[1]: run-containerd-runc-k8s.io-05b5d0283276182d6b32bbe47c192f87ea36716eeb8d3cc36fea551b52e6cecb-runc.oWJp8n.mount: Deactivated successfully.
Sep 13 00:19:04.027046 kubelet[2614]: E0913 00:19:04.026992 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 13 00:19:04.968325 sshd[4438]: pam_unix(sshd:session): session closed for user core
Sep 13 00:19:04.972027 systemd[1]: sshd@25-10.0.0.134:22-10.0.0.1:41736.service: Deactivated successfully.
Sep 13 00:19:04.973828 systemd-logind[1524]: Session 26 logged out. Waiting for processes to exit.
Sep 13 00:19:04.974323 systemd[1]: session-26.scope: Deactivated successfully.
Sep 13 00:19:04.975323 systemd-logind[1524]: Removed session 26.
Sep 13 00:19:05.026657 kubelet[2614]: E0913 00:19:05.026626 2614 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"