Aug 19 00:13:40.886714 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Aug 19 00:13:40.886736 kernel: Linux version 6.12.41-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Mon Aug 18 22:15:14 -00 2025
Aug 19 00:13:40.886745 kernel: KASLR enabled
Aug 19 00:13:40.886751 kernel: efi: EFI v2.7 by EDK II
Aug 19 00:13:40.886757 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Aug 19 00:13:40.886762 kernel: random: crng init done
Aug 19 00:13:40.886769 kernel: secureboot: Secure boot disabled
Aug 19 00:13:40.886776 kernel: ACPI: Early table checksum verification disabled
Aug 19 00:13:40.886781 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Aug 19 00:13:40.886789 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Aug 19 00:13:40.886810 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886816 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886822 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886828 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886835 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886843 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886850 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886856 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886862 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 19 00:13:40.886868 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Aug 19 00:13:40.886875 kernel: ACPI: Use ACPI SPCR as default console: Yes
Aug 19 00:13:40.886881 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:13:40.886887 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Aug 19 00:13:40.886893 kernel: Zone ranges:
Aug 19 00:13:40.886899 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:13:40.886906 kernel: DMA32 empty
Aug 19 00:13:40.886913 kernel: Normal empty
Aug 19 00:13:40.886918 kernel: Device empty
Aug 19 00:13:40.886925 kernel: Movable zone start for each node
Aug 19 00:13:40.886931 kernel: Early memory node ranges
Aug 19 00:13:40.886937 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Aug 19 00:13:40.886943 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Aug 19 00:13:40.886949 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Aug 19 00:13:40.886956 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Aug 19 00:13:40.886962 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Aug 19 00:13:40.886968 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Aug 19 00:13:40.886974 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Aug 19 00:13:40.886982 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Aug 19 00:13:40.886988 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Aug 19 00:13:40.886994 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Aug 19 00:13:40.887003 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Aug 19 00:13:40.887010 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Aug 19 00:13:40.887016 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Aug 19 00:13:40.887024 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Aug 19 00:13:40.887031 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Aug 19 00:13:40.887038 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Aug 19 00:13:40.887044 kernel: psci: probing for conduit method from ACPI.
Aug 19 00:13:40.887050 kernel: psci: PSCIv1.1 detected in firmware.
Aug 19 00:13:40.887057 kernel: psci: Using standard PSCI v0.2 function IDs
Aug 19 00:13:40.887063 kernel: psci: Trusted OS migration not required
Aug 19 00:13:40.887070 kernel: psci: SMC Calling Convention v1.1
Aug 19 00:13:40.887076 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Aug 19 00:13:40.887083 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Aug 19 00:13:40.887091 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Aug 19 00:13:40.887097 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Aug 19 00:13:40.887122 kernel: Detected PIPT I-cache on CPU0
Aug 19 00:13:40.887131 kernel: CPU features: detected: GIC system register CPU interface
Aug 19 00:13:40.887137 kernel: CPU features: detected: Spectre-v4
Aug 19 00:13:40.887144 kernel: CPU features: detected: Spectre-BHB
Aug 19 00:13:40.887150 kernel: CPU features: kernel page table isolation forced ON by KASLR
Aug 19 00:13:40.887157 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Aug 19 00:13:40.887164 kernel: CPU features: detected: ARM erratum 1418040
Aug 19 00:13:40.887170 kernel: CPU features: detected: SSBS not fully self-synchronizing
Aug 19 00:13:40.887177 kernel: alternatives: applying boot alternatives
Aug 19 00:13:40.887184 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:13:40.887193 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 19 00:13:40.887199 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 19 00:13:40.887206 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 19 00:13:40.887213 kernel: Fallback order for Node 0: 0
Aug 19 00:13:40.887219 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Aug 19 00:13:40.887225 kernel: Policy zone: DMA
Aug 19 00:13:40.887237 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 19 00:13:40.887244 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Aug 19 00:13:40.887251 kernel: software IO TLB: area num 4.
Aug 19 00:13:40.887258 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Aug 19 00:13:40.887264 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Aug 19 00:13:40.887273 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Aug 19 00:13:40.887279 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 19 00:13:40.887286 kernel: rcu: RCU event tracing is enabled.
Aug 19 00:13:40.887293 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Aug 19 00:13:40.887300 kernel: Trampoline variant of Tasks RCU enabled.
Aug 19 00:13:40.887306 kernel: Tracing variant of Tasks RCU enabled.
Aug 19 00:13:40.887313 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 19 00:13:40.887319 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Aug 19 00:13:40.887326 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:13:40.887333 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Aug 19 00:13:40.887340 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Aug 19 00:13:40.887347 kernel: GICv3: 256 SPIs implemented
Aug 19 00:13:40.887354 kernel: GICv3: 0 Extended SPIs implemented
Aug 19 00:13:40.887361 kernel: Root IRQ handler: gic_handle_irq
Aug 19 00:13:40.887367 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Aug 19 00:13:40.887373 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Aug 19 00:13:40.887380 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Aug 19 00:13:40.887386 kernel: ITS [mem 0x08080000-0x0809ffff]
Aug 19 00:13:40.887393 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Aug 19 00:13:40.887400 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Aug 19 00:13:40.887407 kernel: GICv3: using LPI property table @0x0000000040130000
Aug 19 00:13:40.887413 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Aug 19 00:13:40.887420 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 19 00:13:40.887428 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:13:40.887435 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Aug 19 00:13:40.887442 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Aug 19 00:13:40.887448 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Aug 19 00:13:40.887455 kernel: arm-pv: using stolen time PV
Aug 19 00:13:40.887462 kernel: Console: colour dummy device 80x25
Aug 19 00:13:40.887469 kernel: ACPI: Core revision 20240827
Aug 19 00:13:40.887476 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Aug 19 00:13:40.887483 kernel: pid_max: default: 32768 minimum: 301
Aug 19 00:13:40.887489 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 19 00:13:40.887498 kernel: landlock: Up and running.
Aug 19 00:13:40.887505 kernel: SELinux: Initializing.
Aug 19 00:13:40.887512 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:13:40.887519 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 19 00:13:40.887525 kernel: rcu: Hierarchical SRCU implementation.
Aug 19 00:13:40.887532 kernel: rcu: Max phase no-delay instances is 400.
Aug 19 00:13:40.887539 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 19 00:13:40.887546 kernel: Remapping and enabling EFI services.
Aug 19 00:13:40.887552 kernel: smp: Bringing up secondary CPUs ...
Aug 19 00:13:40.887565 kernel: Detected PIPT I-cache on CPU1
Aug 19 00:13:40.887572 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Aug 19 00:13:40.887579 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Aug 19 00:13:40.887588 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:13:40.887595 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Aug 19 00:13:40.887602 kernel: Detected PIPT I-cache on CPU2
Aug 19 00:13:40.887609 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Aug 19 00:13:40.887616 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Aug 19 00:13:40.887625 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:13:40.887632 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Aug 19 00:13:40.887639 kernel: Detected PIPT I-cache on CPU3
Aug 19 00:13:40.887646 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Aug 19 00:13:40.887653 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Aug 19 00:13:40.887661 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Aug 19 00:13:40.887667 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Aug 19 00:13:40.887674 kernel: smp: Brought up 1 node, 4 CPUs
Aug 19 00:13:40.887682 kernel: SMP: Total of 4 processors activated.
Aug 19 00:13:40.887691 kernel: CPU: All CPU(s) started at EL1
Aug 19 00:13:40.887698 kernel: CPU features: detected: 32-bit EL0 Support
Aug 19 00:13:40.887705 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Aug 19 00:13:40.887712 kernel: CPU features: detected: Common not Private translations
Aug 19 00:13:40.887719 kernel: CPU features: detected: CRC32 instructions
Aug 19 00:13:40.887726 kernel: CPU features: detected: Enhanced Virtualization Traps
Aug 19 00:13:40.887734 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Aug 19 00:13:40.887741 kernel: CPU features: detected: LSE atomic instructions
Aug 19 00:13:40.887748 kernel: CPU features: detected: Privileged Access Never
Aug 19 00:13:40.887755 kernel: CPU features: detected: RAS Extension Support
Aug 19 00:13:40.887764 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Aug 19 00:13:40.887770 kernel: alternatives: applying system-wide alternatives
Aug 19 00:13:40.887777 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Aug 19 00:13:40.887785 kernel: Memory: 2424544K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 125408K reserved, 16384K cma-reserved)
Aug 19 00:13:40.887793 kernel: devtmpfs: initialized
Aug 19 00:13:40.887800 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 19 00:13:40.887807 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Aug 19 00:13:40.887814 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Aug 19 00:13:40.887822 kernel: 0 pages in range for non-PLT usage
Aug 19 00:13:40.887829 kernel: 508576 pages in range for PLT usage
Aug 19 00:13:40.887836 kernel: pinctrl core: initialized pinctrl subsystem
Aug 19 00:13:40.887843 kernel: SMBIOS 3.0.0 present.
Aug 19 00:13:40.887850 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Aug 19 00:13:40.887857 kernel: DMI: Memory slots populated: 1/1
Aug 19 00:13:40.887864 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 19 00:13:40.887871 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Aug 19 00:13:40.887878 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Aug 19 00:13:40.887900 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Aug 19 00:13:40.887908 kernel: audit: initializing netlink subsys (disabled)
Aug 19 00:13:40.887915 kernel: audit: type=2000 audit(0.021:1): state=initialized audit_enabled=0 res=1
Aug 19 00:13:40.887922 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 19 00:13:40.887930 kernel: cpuidle: using governor menu
Aug 19 00:13:40.887937 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Aug 19 00:13:40.887945 kernel: ASID allocator initialised with 32768 entries
Aug 19 00:13:40.887952 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 19 00:13:40.887959 kernel: Serial: AMBA PL011 UART driver
Aug 19 00:13:40.887968 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 19 00:13:40.887975 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Aug 19 00:13:40.887982 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Aug 19 00:13:40.887989 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Aug 19 00:13:40.887996 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 19 00:13:40.888003 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Aug 19 00:13:40.888010 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Aug 19 00:13:40.888018 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Aug 19 00:13:40.888025 kernel: ACPI: Added _OSI(Module Device)
Aug 19 00:13:40.888032 kernel: ACPI: Added _OSI(Processor Device)
Aug 19 00:13:40.888040 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 19 00:13:40.888048 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 19 00:13:40.888055 kernel: ACPI: Interpreter enabled
Aug 19 00:13:40.888062 kernel: ACPI: Using GIC for interrupt routing
Aug 19 00:13:40.888069 kernel: ACPI: MCFG table detected, 1 entries
Aug 19 00:13:40.888076 kernel: ACPI: CPU0 has been hot-added
Aug 19 00:13:40.888083 kernel: ACPI: CPU1 has been hot-added
Aug 19 00:13:40.888090 kernel: ACPI: CPU2 has been hot-added
Aug 19 00:13:40.888097 kernel: ACPI: CPU3 has been hot-added
Aug 19 00:13:40.888279 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Aug 19 00:13:40.888294 kernel: printk: legacy console [ttyAMA0] enabled
Aug 19 00:13:40.888302 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 19 00:13:40.888450 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 19 00:13:40.888520 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Aug 19 00:13:40.888583 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Aug 19 00:13:40.888649 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Aug 19 00:13:40.888715 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Aug 19 00:13:40.888724 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Aug 19 00:13:40.888732 kernel: PCI host bridge to bus 0000:00
Aug 19 00:13:40.888799 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Aug 19 00:13:40.888854 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Aug 19 00:13:40.888914 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Aug 19 00:13:40.888967 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 19 00:13:40.889060 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Aug 19 00:13:40.889156 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Aug 19 00:13:40.889225 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Aug 19 00:13:40.889952 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Aug 19 00:13:40.890015 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Aug 19 00:13:40.890074 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Aug 19 00:13:40.890162 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Aug 19 00:13:40.890239 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Aug 19 00:13:40.890306 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Aug 19 00:13:40.890375 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Aug 19 00:13:40.890429 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Aug 19 00:13:40.890438 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Aug 19 00:13:40.890446 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Aug 19 00:13:40.890453 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Aug 19 00:13:40.890463 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Aug 19 00:13:40.890471 kernel: iommu: Default domain type: Translated
Aug 19 00:13:40.890478 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Aug 19 00:13:40.890485 kernel: efivars: Registered efivars operations
Aug 19 00:13:40.890493 kernel: vgaarb: loaded
Aug 19 00:13:40.890500 kernel: clocksource: Switched to clocksource arch_sys_counter
Aug 19 00:13:40.890508 kernel: VFS: Disk quotas dquot_6.6.0
Aug 19 00:13:40.890516 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 19 00:13:40.890523 kernel: pnp: PnP ACPI init
Aug 19 00:13:40.890594 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Aug 19 00:13:40.890604 kernel: pnp: PnP ACPI: found 1 devices
Aug 19 00:13:40.890612 kernel: NET: Registered PF_INET protocol family
Aug 19 00:13:40.890619 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 19 00:13:40.890626 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 19 00:13:40.890633 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 19 00:13:40.890640 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 19 00:13:40.890647 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 19 00:13:40.890656 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 19 00:13:40.890663 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:13:40.890670 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 19 00:13:40.890677 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 19 00:13:40.890684 kernel: PCI: CLS 0 bytes, default 64
Aug 19 00:13:40.890692 kernel: kvm [1]: HYP mode not available
Aug 19 00:13:40.890699 kernel: Initialise system trusted keyrings
Aug 19 00:13:40.890706 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 19 00:13:40.890713 kernel: Key type asymmetric registered
Aug 19 00:13:40.890720 kernel: Asymmetric key parser 'x509' registered
Aug 19 00:13:40.890728 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Aug 19 00:13:40.890736 kernel: io scheduler mq-deadline registered
Aug 19 00:13:40.890743 kernel: io scheduler kyber registered
Aug 19 00:13:40.890750 kernel: io scheduler bfq registered
Aug 19 00:13:40.890757 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Aug 19 00:13:40.890764 kernel: ACPI: button: Power Button [PWRB]
Aug 19 00:13:40.890772 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Aug 19 00:13:40.890847 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Aug 19 00:13:40.890858 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 19 00:13:40.890867 kernel: thunder_xcv, ver 1.0
Aug 19 00:13:40.890874 kernel: thunder_bgx, ver 1.0
Aug 19 00:13:40.890881 kernel: nicpf, ver 1.0
Aug 19 00:13:40.890888 kernel: nicvf, ver 1.0
Aug 19 00:13:40.890958 kernel: rtc-efi rtc-efi.0: registered as rtc0
Aug 19 00:13:40.891015 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-08-19T00:13:40 UTC (1755562420)
Aug 19 00:13:40.891024 kernel: hid: raw HID events driver (C) Jiri Kosina
Aug 19 00:13:40.891032 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Aug 19 00:13:40.891041 kernel: watchdog: NMI not fully supported
Aug 19 00:13:40.891048 kernel: watchdog: Hard watchdog permanently disabled
Aug 19 00:13:40.891055 kernel: NET: Registered PF_INET6 protocol family
Aug 19 00:13:40.891062 kernel: Segment Routing with IPv6
Aug 19 00:13:40.891069 kernel: In-situ OAM (IOAM) with IPv6
Aug 19 00:13:40.891076 kernel: NET: Registered PF_PACKET protocol family
Aug 19 00:13:40.891083 kernel: Key type dns_resolver registered
Aug 19 00:13:40.891090 kernel: registered taskstats version 1
Aug 19 00:13:40.891097 kernel: Loading compiled-in X.509 certificates
Aug 19 00:13:40.891128 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.41-flatcar: becc5a61d1c5dcbcd174f4649c64b863031dbaa8'
Aug 19 00:13:40.891137 kernel: Demotion targets for Node 0: null
Aug 19 00:13:40.891144 kernel: Key type .fscrypt registered
Aug 19 00:13:40.891151 kernel: Key type fscrypt-provisioning registered
Aug 19 00:13:40.891158 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 19 00:13:40.891165 kernel: ima: Allocated hash algorithm: sha1
Aug 19 00:13:40.891173 kernel: ima: No architecture policies found
Aug 19 00:13:40.891180 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Aug 19 00:13:40.891189 kernel: clk: Disabling unused clocks
Aug 19 00:13:40.891197 kernel: PM: genpd: Disabling unused power domains
Aug 19 00:13:40.891204 kernel: Warning: unable to open an initial console.
Aug 19 00:13:40.891211 kernel: Freeing unused kernel memory: 38912K
Aug 19 00:13:40.891218 kernel: Run /init as init process
Aug 19 00:13:40.891225 kernel: with arguments:
Aug 19 00:13:40.891239 kernel: /init
Aug 19 00:13:40.891246 kernel: with environment:
Aug 19 00:13:40.891253 kernel: HOME=/
Aug 19 00:13:40.891260 kernel: TERM=linux
Aug 19 00:13:40.891269 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 19 00:13:40.891277 systemd[1]: Successfully made /usr/ read-only.
Aug 19 00:13:40.891287 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 00:13:40.891295 systemd[1]: Detected virtualization kvm.
Aug 19 00:13:40.891302 systemd[1]: Detected architecture arm64.
Aug 19 00:13:40.891309 systemd[1]: Running in initrd.
Aug 19 00:13:40.891316 systemd[1]: No hostname configured, using default hostname.
Aug 19 00:13:40.891326 systemd[1]: Hostname set to .
Aug 19 00:13:40.891333 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 00:13:40.891341 systemd[1]: Queued start job for default target initrd.target.
Aug 19 00:13:40.891348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:13:40.891356 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:13:40.891364 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 19 00:13:40.891372 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 00:13:40.891380 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 19 00:13:40.891389 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 19 00:13:40.891398 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 19 00:13:40.891406 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 19 00:13:40.891413 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:13:40.891421 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:13:40.891429 systemd[1]: Reached target paths.target - Path Units.
Aug 19 00:13:40.891437 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 00:13:40.891446 systemd[1]: Reached target swap.target - Swaps.
Aug 19 00:13:40.891453 systemd[1]: Reached target timers.target - Timer Units.
Aug 19 00:13:40.891461 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 19 00:13:40.891468 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 19 00:13:40.891476 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 19 00:13:40.891483 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 19 00:13:40.891491 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:13:40.891498 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:13:40.891508 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:13:40.891515 systemd[1]: Reached target sockets.target - Socket Units.
Aug 19 00:13:40.891523 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 19 00:13:40.891530 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 00:13:40.891538 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 19 00:13:40.891546 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 19 00:13:40.891553 systemd[1]: Starting systemd-fsck-usr.service...
Aug 19 00:13:40.891561 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 00:13:40.891569 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 00:13:40.891578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:13:40.891585 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 19 00:13:40.891594 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:13:40.891601 systemd[1]: Finished systemd-fsck-usr.service.
Aug 19 00:13:40.891632 systemd-journald[245]: Collecting audit messages is disabled.
Aug 19 00:13:40.891652 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 00:13:40.891661 systemd-journald[245]: Journal started
Aug 19 00:13:40.891680 systemd-journald[245]: Runtime Journal (/run/log/journal/673c8a83742c4e8fbe354b7c174f8f2d) is 6M, max 48.5M, 42.4M free.
Aug 19 00:13:40.884073 systemd-modules-load[246]: Inserted module 'overlay'
Aug 19 00:13:40.894209 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 00:13:40.900581 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:13:40.902466 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 19 00:13:40.904641 systemd-modules-load[246]: Inserted module 'br_netfilter'
Aug 19 00:13:40.905876 kernel: Bridge firewalling registered
Aug 19 00:13:40.905558 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:13:40.909204 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:13:40.913994 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 19 00:13:40.916147 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:13:40.918312 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 00:13:40.934189 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 00:13:40.941394 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:13:40.943046 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 19 00:13:40.945530 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:13:40.947009 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:13:40.951793 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 00:13:40.955642 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 00:13:40.968744 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 19 00:13:40.983700 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a868ccde263e96e0a18737fdbf04ca04bbf30dfe23963f1ae3994966e8fc9468
Aug 19 00:13:40.997979 systemd-resolved[287]: Positive Trust Anchors:
Aug 19 00:13:40.997998 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 00:13:40.998029 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 19 00:13:41.003035 systemd-resolved[287]: Defaulting to hostname 'linux'.
Aug 19 00:13:41.004035 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 19 00:13:41.008678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:13:41.068151 kernel: SCSI subsystem initialized
Aug 19 00:13:41.073130 kernel: Loading iSCSI transport class v2.0-870.
Aug 19 00:13:41.084137 kernel: iscsi: registered transport (tcp)
Aug 19 00:13:41.098143 kernel: iscsi: registered transport (qla4xxx)
Aug 19 00:13:41.098202 kernel: QLogic iSCSI HBA Driver
Aug 19 00:13:41.118313 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 00:13:41.135915 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:13:41.137723 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 00:13:41.212163 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 19 00:13:41.214247 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 19 00:13:41.284147 kernel: raid6: neonx8 gen() 15777 MB/s Aug 19 00:13:41.301130 kernel: raid6: neonx4 gen() 15791 MB/s Aug 19 00:13:41.318137 kernel: raid6: neonx2 gen() 13261 MB/s Aug 19 00:13:41.335128 kernel: raid6: neonx1 gen() 10460 MB/s Aug 19 00:13:41.352131 kernel: raid6: int64x8 gen() 6899 MB/s Aug 19 00:13:41.369132 kernel: raid6: int64x4 gen() 7349 MB/s Aug 19 00:13:41.386132 kernel: raid6: int64x2 gen() 6099 MB/s Aug 19 00:13:41.403400 kernel: raid6: int64x1 gen() 5037 MB/s Aug 19 00:13:41.403445 kernel: raid6: using algorithm neonx4 gen() 15791 MB/s Aug 19 00:13:41.421389 kernel: raid6: .... xor() 12319 MB/s, rmw enabled Aug 19 00:13:41.421436 kernel: raid6: using neon recovery algorithm Aug 19 00:13:41.427642 kernel: xor: measuring software checksum speed Aug 19 00:13:41.427671 kernel: 8regs : 21636 MB/sec Aug 19 00:13:41.427680 kernel: 32regs : 21664 MB/sec Aug 19 00:13:41.428329 kernel: arm64_neon : 27955 MB/sec Aug 19 00:13:41.428344 kernel: xor: using function: arm64_neon (27955 MB/sec) Aug 19 00:13:41.496137 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 19 00:13:41.505798 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 19 00:13:41.508844 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 19 00:13:41.542301 systemd-udevd[499]: Using default interface naming scheme 'v255'. Aug 19 00:13:41.546427 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 19 00:13:41.548568 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 19 00:13:41.574741 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Aug 19 00:13:41.600785 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:13:41.603306 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 19 00:13:41.656482 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Aug 19 00:13:41.659937 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 19 00:13:41.704141 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Aug 19 00:13:41.710310 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Aug 19 00:13:41.717480 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 19 00:13:41.717533 kernel: GPT:9289727 != 19775487 Aug 19 00:13:41.718616 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 19 00:13:41.720707 kernel: GPT:9289727 != 19775487 Aug 19 00:13:41.720755 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 19 00:13:41.720766 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 00:13:41.720954 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 19 00:13:41.721085 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:13:41.725477 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:13:41.728344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 19 00:13:41.760703 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Aug 19 00:13:41.763489 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 19 00:13:41.764856 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 19 00:13:41.774263 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Aug 19 00:13:41.781343 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Aug 19 00:13:41.782650 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Aug 19 00:13:41.791580 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
Aug 19 00:13:41.792908 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:13:41.795133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:13:41.797397 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 19 00:13:41.800245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 19 00:13:41.802082 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 19 00:13:41.821296 disk-uuid[590]: Primary Header is updated. Aug 19 00:13:41.821296 disk-uuid[590]: Secondary Entries is updated. Aug 19 00:13:41.821296 disk-uuid[590]: Secondary Header is updated. Aug 19 00:13:41.824466 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:13:41.827243 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 00:13:42.836161 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Aug 19 00:13:42.836217 disk-uuid[595]: The operation has completed successfully. Aug 19 00:13:42.857790 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 19 00:13:42.859031 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 19 00:13:42.888777 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 19 00:13:42.923678 sh[610]: Success Aug 19 00:13:42.936133 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 19 00:13:42.936197 kernel: device-mapper: uevent: version 1.0.3 Aug 19 00:13:42.936209 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 19 00:13:42.946128 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Aug 19 00:13:42.976609 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 19 00:13:42.978625 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Aug 19 00:13:42.989876 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 19 00:13:42.997261 kernel: BTRFS: device fsid 1e492084-d287-4a43-8dc6-ad086a072625 devid 1 transid 45 /dev/mapper/usr (253:0) scanned by mount (622) Aug 19 00:13:42.997296 kernel: BTRFS info (device dm-0): first mount of filesystem 1e492084-d287-4a43-8dc6-ad086a072625 Aug 19 00:13:42.998415 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:13:42.998432 kernel: BTRFS info (device dm-0): using free-space-tree Aug 19 00:13:43.002805 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 19 00:13:43.004278 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:13:43.005863 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 19 00:13:43.006713 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 19 00:13:43.008519 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 19 00:13:43.031139 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Aug 19 00:13:43.031210 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:13:43.033553 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:13:43.034411 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 00:13:43.043140 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:13:43.043276 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 19 00:13:43.045326 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 19 00:13:43.112502 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 19 00:13:43.117543 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 19 00:13:43.164262 systemd-networkd[794]: lo: Link UP Aug 19 00:13:43.164277 systemd-networkd[794]: lo: Gained carrier Aug 19 00:13:43.165164 systemd-networkd[794]: Enumeration completed Aug 19 00:13:43.165540 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 19 00:13:43.165812 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:13:43.165816 systemd-networkd[794]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 19 00:13:43.166471 systemd-networkd[794]: eth0: Link UP Aug 19 00:13:43.166786 systemd-networkd[794]: eth0: Gained carrier Aug 19 00:13:43.166797 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 19 00:13:43.167982 systemd[1]: Reached target network.target - Network. 
Aug 19 00:13:43.193173 systemd-networkd[794]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1 Aug 19 00:13:43.205417 ignition[700]: Ignition 2.21.0 Aug 19 00:13:43.205431 ignition[700]: Stage: fetch-offline Aug 19 00:13:43.205468 ignition[700]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:13:43.205475 ignition[700]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:13:43.205674 ignition[700]: parsed url from cmdline: "" Aug 19 00:13:43.205678 ignition[700]: no config URL provided Aug 19 00:13:43.205682 ignition[700]: reading system config file "/usr/lib/ignition/user.ign" Aug 19 00:13:43.205690 ignition[700]: no config at "/usr/lib/ignition/user.ign" Aug 19 00:13:43.205710 ignition[700]: op(1): [started] loading QEMU firmware config module Aug 19 00:13:43.205715 ignition[700]: op(1): executing: "modprobe" "qemu_fw_cfg" Aug 19 00:13:43.219042 ignition[700]: op(1): [finished] loading QEMU firmware config module Aug 19 00:13:43.257513 ignition[700]: parsing config with SHA512: 67f91caf2d5587cfdc58699732b2e179eaebed17c7b852a409ed159f18a3dd281d1508dd47bf56925493584cce18b6547843c50624292db3e0c6f5b3d64044d8 Aug 19 00:13:43.262286 unknown[700]: fetched base config from "system" Aug 19 00:13:43.262300 unknown[700]: fetched user config from "qemu" Aug 19 00:13:43.262671 ignition[700]: fetch-offline: fetch-offline passed Aug 19 00:13:43.262725 ignition[700]: Ignition finished successfully Aug 19 00:13:43.267163 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:13:43.268824 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Aug 19 00:13:43.269765 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 19 00:13:43.303888 ignition[811]: Ignition 2.21.0 Aug 19 00:13:43.303903 ignition[811]: Stage: kargs Aug 19 00:13:43.304077 ignition[811]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:13:43.304086 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:13:43.305338 ignition[811]: kargs: kargs passed Aug 19 00:13:43.305393 ignition[811]: Ignition finished successfully Aug 19 00:13:43.309191 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 19 00:13:43.311292 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 19 00:13:43.341456 ignition[819]: Ignition 2.21.0 Aug 19 00:13:43.341472 ignition[819]: Stage: disks Aug 19 00:13:43.341609 ignition[819]: no configs at "/usr/lib/ignition/base.d" Aug 19 00:13:43.341618 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:13:43.344094 ignition[819]: disks: disks passed Aug 19 00:13:43.344209 ignition[819]: Ignition finished successfully Aug 19 00:13:43.346600 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 19 00:13:43.348332 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 19 00:13:43.350063 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 19 00:13:43.352172 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 19 00:13:43.354080 systemd[1]: Reached target sysinit.target - System Initialization. Aug 19 00:13:43.355879 systemd[1]: Reached target basic.target - Basic System. Aug 19 00:13:43.358554 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 19 00:13:43.384275 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 19 00:13:43.388377 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 19 00:13:43.390721 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 19 00:13:43.467130 kernel: EXT4-fs (vda9): mounted filesystem 593a9299-85f8-44ab-a00f-cf95b7233713 r/w with ordered data mode. Quota mode: none. Aug 19 00:13:43.468083 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 19 00:13:43.469366 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 19 00:13:43.471875 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 19 00:13:43.473744 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 19 00:13:43.474824 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 19 00:13:43.474870 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 19 00:13:43.474896 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:13:43.494257 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 19 00:13:43.497882 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 19 00:13:43.500265 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (838) Aug 19 00:13:43.502120 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:13:43.502145 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:13:43.502156 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 00:13:43.504719 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 19 00:13:43.558249 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Aug 19 00:13:43.561558 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Aug 19 00:13:43.564720 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Aug 19 00:13:43.567842 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Aug 19 00:13:43.658955 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 19 00:13:43.661174 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 19 00:13:43.662867 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 19 00:13:43.683449 kernel: BTRFS info (device vda6): last unmount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:13:43.696125 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 19 00:13:43.703287 ignition[952]: INFO : Ignition 2.21.0 Aug 19 00:13:43.703287 ignition[952]: INFO : Stage: mount Aug 19 00:13:43.703287 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:13:43.703287 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:13:43.709369 ignition[952]: INFO : mount: mount passed Aug 19 00:13:43.709369 ignition[952]: INFO : Ignition finished successfully Aug 19 00:13:43.708166 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 19 00:13:43.711501 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 19 00:13:43.995272 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 19 00:13:43.996810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Aug 19 00:13:44.029122 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (964) Aug 19 00:13:44.031564 kernel: BTRFS info (device vda6): first mount of filesystem de95eca0-5455-4710-9904-3d3a2312ef33 Aug 19 00:13:44.031580 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Aug 19 00:13:44.031590 kernel: BTRFS info (device vda6): using free-space-tree Aug 19 00:13:44.034720 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 19 00:13:44.070952 ignition[981]: INFO : Ignition 2.21.0 Aug 19 00:13:44.070952 ignition[981]: INFO : Stage: files Aug 19 00:13:44.073616 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:13:44.073616 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:13:44.073616 ignition[981]: DEBUG : files: compiled without relabeling support, skipping Aug 19 00:13:44.073616 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 19 00:13:44.073616 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 19 00:13:44.080018 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 19 00:13:44.080018 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 19 00:13:44.080018 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 19 00:13:44.080018 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 19 00:13:44.080018 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Aug 19 00:13:44.075999 unknown[981]: wrote ssh authorized keys file for user: core Aug 19 00:13:44.718270 systemd-networkd[794]: eth0: Gained IPv6LL
Aug 19 00:13:44.749666 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 19 00:13:45.826234 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Aug 19 00:13:45.826234 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 00:13:45.830437 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Aug 19 00:13:46.031777 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Aug 19 00:13:46.159561 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Aug 19 00:13:46.159561 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 19 00:13:46.164033 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 19 00:13:46.182793 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 19 00:13:46.182793 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 19 00:13:46.182793 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Aug 19 00:13:46.437547 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Aug 19 00:13:46.775820 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Aug 19 00:13:46.775820 ignition[981]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Aug 19 00:13:46.779448 ignition[981]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 19 00:13:46.783126 ignition[981]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 19 00:13:46.783126 ignition[981]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Aug 19 00:13:46.783126 ignition[981]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Aug 19 00:13:46.788158 ignition[981]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 00:13:46.788158 ignition[981]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Aug 19 00:13:46.788158 ignition[981]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Aug 19 00:13:46.788158 ignition[981]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Aug 19 00:13:46.806939 ignition[981]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 00:13:46.810709 ignition[981]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Aug 19 00:13:46.812531 ignition[981]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Aug 19 00:13:46.812531 ignition[981]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Aug 19 00:13:46.812531 ignition[981]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Aug 19 00:13:46.812531 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:13:46.812531 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 19 00:13:46.812531 ignition[981]: INFO : files: files passed Aug 19 00:13:46.812531 ignition[981]: INFO : Ignition finished successfully Aug 19 00:13:46.817147 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 19 00:13:46.820564 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 19 00:13:46.822852 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 19 00:13:46.838913 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 19 00:13:46.839035 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 19 00:13:46.842471 initrd-setup-root-after-ignition[1011]: grep: /sysroot/oem/oem-release: No such file or directory Aug 19 00:13:46.843972 initrd-setup-root-after-ignition[1013]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:13:46.843972 initrd-setup-root-after-ignition[1013]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:13:46.847157 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 19 00:13:46.846536 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:13:46.848723 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 19 00:13:46.851796 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 19 00:13:46.906321 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 19 00:13:46.906475 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 19 00:13:46.908938 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 19 00:13:46.910752 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 19 00:13:46.912694 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 19 00:13:46.913637 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 19 00:13:46.952345 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Aug 19 00:13:46.956882 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 19 00:13:46.995759 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 19 00:13:46.997232 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 19 00:13:46.999543 systemd[1]: Stopped target timers.target - Timer Units. Aug 19 00:13:47.001392 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 19 00:13:47.001530 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 19 00:13:47.004138 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 19 00:13:47.006305 systemd[1]: Stopped target basic.target - Basic System. Aug 19 00:13:47.008145 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 19 00:13:47.010084 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 19 00:13:47.012394 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 19 00:13:47.014580 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 19 00:13:47.016674 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 19 00:13:47.018647 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 19 00:13:47.020900 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 19 00:13:47.023010 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 19 00:13:47.024937 systemd[1]: Stopped target swap.target - Swaps. Aug 19 00:13:47.026601 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 19 00:13:47.026746 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 19 00:13:47.029386 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 19 00:13:47.031582 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Aug 19 00:13:47.033910 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 19 00:13:47.034075 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 19 00:13:47.036254 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 19 00:13:47.036396 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 19 00:13:47.039394 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 19 00:13:47.039523 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 19 00:13:47.041514 systemd[1]: Stopped target paths.target - Path Units. Aug 19 00:13:47.043176 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 19 00:13:47.043329 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 19 00:13:47.045525 systemd[1]: Stopped target slices.target - Slice Units. Aug 19 00:13:47.047448 systemd[1]: Stopped target sockets.target - Socket Units. Aug 19 00:13:47.049195 systemd[1]: iscsid.socket: Deactivated successfully. Aug 19 00:13:47.049308 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 19 00:13:47.051193 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 19 00:13:47.051286 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 19 00:13:47.053798 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 19 00:13:47.053930 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 19 00:13:47.055829 systemd[1]: ignition-files.service: Deactivated successfully. Aug 19 00:13:47.055937 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 19 00:13:47.058946 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 19 00:13:47.061417 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
Aug 19 00:13:47.062337 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 19 00:13:47.062466 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 19 00:13:47.064645 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 19 00:13:47.064739 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 19 00:13:47.070797 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 19 00:13:47.079302 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 19 00:13:47.088904 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 19 00:13:47.096402 ignition[1037]: INFO : Ignition 2.21.0 Aug 19 00:13:47.096402 ignition[1037]: INFO : Stage: umount Aug 19 00:13:47.098470 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 19 00:13:47.098470 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Aug 19 00:13:47.101824 ignition[1037]: INFO : umount: umount passed Aug 19 00:13:47.101824 ignition[1037]: INFO : Ignition finished successfully Aug 19 00:13:47.103012 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 19 00:13:47.103134 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 19 00:13:47.106775 systemd[1]: Stopped target network.target - Network. Aug 19 00:13:47.108139 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 19 00:13:47.108213 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 19 00:13:47.109928 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 19 00:13:47.109978 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 19 00:13:47.111730 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 19 00:13:47.111779 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 19 00:13:47.113375 systemd[1]: ignition-setup-pre.service: Deactivated successfully. 
Aug 19 00:13:47.113415 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 19 00:13:47.115418 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 19 00:13:47.117298 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 19 00:13:47.123216 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 19 00:13:47.123368 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 19 00:13:47.127794 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 19 00:13:47.128054 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 19 00:13:47.128092 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 19 00:13:47.132258 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 19 00:13:47.138212 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 19 00:13:47.140185 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 19 00:13:47.145810 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 19 00:13:47.146004 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 19 00:13:47.147352 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 19 00:13:47.147398 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 19 00:13:47.151861 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 19 00:13:47.153434 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 19 00:13:47.153507 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 19 00:13:47.157129 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 19 00:13:47.157180 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Aug 19 00:13:47.161279 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 19 00:13:47.161335 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:13:47.162984 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 00:13:47.167896 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 19 00:13:47.168721 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 19 00:13:47.168822 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 19 00:13:47.173924 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 19 00:13:47.174014 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 19 00:13:47.180809 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 19 00:13:47.186318 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:13:47.188093 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 19 00:13:47.188165 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:13:47.190091 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 19 00:13:47.190148 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:13:47.191406 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 19 00:13:47.191470 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 19 00:13:47.194589 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 19 00:13:47.194651 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 19 00:13:47.197937 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 19 00:13:47.197994 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 19 00:13:47.202397 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 19 00:13:47.203918 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 19 00:13:47.203990 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:13:47.208017 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 19 00:13:47.208068 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:13:47.213238 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 19 00:13:47.213295 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:13:47.219511 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 19 00:13:47.219586 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:13:47.222514 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 19 00:13:47.222570 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:13:47.226757 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 19 00:13:47.226859 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 19 00:13:47.229302 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 19 00:13:47.229396 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 19 00:13:47.232230 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 19 00:13:47.235245 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 19 00:13:47.254613 systemd[1]: Switching root.
Aug 19 00:13:47.289606 systemd-journald[245]: Journal stopped
Aug 19 00:13:48.257765 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Aug 19 00:13:48.257817 kernel: SELinux: policy capability network_peer_controls=1
Aug 19 00:13:48.257829 kernel: SELinux: policy capability open_perms=1
Aug 19 00:13:48.257838 kernel: SELinux: policy capability extended_socket_class=1
Aug 19 00:13:48.257847 kernel: SELinux: policy capability always_check_network=0
Aug 19 00:13:48.257856 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 19 00:13:48.257871 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 19 00:13:48.257880 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 19 00:13:48.257898 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 19 00:13:48.257907 kernel: SELinux: policy capability userspace_initial_context=0
Aug 19 00:13:48.257919 kernel: audit: type=1403 audit(1755562427.533:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 19 00:13:48.257933 systemd[1]: Successfully loaded SELinux policy in 66.394ms.
Aug 19 00:13:48.257949 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.805ms.
Aug 19 00:13:48.257960 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 19 00:13:48.257974 systemd[1]: Detected virtualization kvm.
Aug 19 00:13:48.257984 systemd[1]: Detected architecture arm64.
Aug 19 00:13:48.257994 systemd[1]: Detected first boot.
Aug 19 00:13:48.258005 systemd[1]: Initializing machine ID from VM UUID.
Aug 19 00:13:48.258015 zram_generator::config[1083]: No configuration found.
Aug 19 00:13:48.258027 kernel: NET: Registered PF_VSOCK protocol family
Aug 19 00:13:48.258037 systemd[1]: Populated /etc with preset unit settings.
Aug 19 00:13:48.258047 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 19 00:13:48.258057 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 19 00:13:48.258067 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 19 00:13:48.258078 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 19 00:13:48.258088 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 19 00:13:48.258099 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 19 00:13:48.258195 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 19 00:13:48.258219 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 19 00:13:48.258230 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 19 00:13:48.258240 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 19 00:13:48.258261 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 19 00:13:48.258271 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 19 00:13:48.258281 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 19 00:13:48.258292 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 19 00:13:48.258303 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 19 00:13:48.258313 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 19 00:13:48.258326 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 19 00:13:48.258336 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 19 00:13:48.258348 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Aug 19 00:13:48.258359 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 19 00:13:48.258370 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 19 00:13:48.258381 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 19 00:13:48.258391 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 19 00:13:48.258417 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 19 00:13:48.258427 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 19 00:13:48.258437 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 19 00:13:48.258447 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 19 00:13:48.258457 systemd[1]: Reached target slices.target - Slice Units.
Aug 19 00:13:48.258467 systemd[1]: Reached target swap.target - Swaps.
Aug 19 00:13:48.258478 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 19 00:13:48.258487 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 19 00:13:48.258497 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 19 00:13:48.258509 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 19 00:13:48.258519 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 19 00:13:48.258529 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 19 00:13:48.258540 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 19 00:13:48.258550 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 19 00:13:48.258560 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 19 00:13:48.258570 systemd[1]: Mounting media.mount - External Media Directory...
Aug 19 00:13:48.258580 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 19 00:13:48.258590 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 19 00:13:48.258601 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 19 00:13:48.258612 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 19 00:13:48.258622 systemd[1]: Reached target machines.target - Containers.
Aug 19 00:13:48.258632 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 19 00:13:48.258642 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:13:48.258652 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 19 00:13:48.258662 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 19 00:13:48.258672 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:13:48.258681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 19 00:13:48.258693 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:13:48.258703 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 19 00:13:48.258712 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:13:48.258723 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 19 00:13:48.258732 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 19 00:13:48.258743 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 19 00:13:48.258753 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 19 00:13:48.258763 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 19 00:13:48.258775 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:13:48.258785 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 19 00:13:48.258794 kernel: loop: module loaded
Aug 19 00:13:48.258804 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 19 00:13:48.258814 kernel: fuse: init (API version 7.41)
Aug 19 00:13:48.258823 kernel: ACPI: bus type drm_connector registered
Aug 19 00:13:48.258833 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 19 00:13:48.258843 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 19 00:13:48.258853 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 19 00:13:48.258864 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 19 00:13:48.258899 systemd-journald[1158]: Collecting audit messages is disabled.
Aug 19 00:13:48.258923 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 19 00:13:48.258933 systemd[1]: Stopped verity-setup.service.
Aug 19 00:13:48.258945 systemd-journald[1158]: Journal started
Aug 19 00:13:48.258966 systemd-journald[1158]: Runtime Journal (/run/log/journal/673c8a83742c4e8fbe354b7c174f8f2d) is 6M, max 48.5M, 42.4M free.
Aug 19 00:13:47.986077 systemd[1]: Queued start job for default target multi-user.target.
Aug 19 00:13:48.008441 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Aug 19 00:13:48.008873 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 19 00:13:48.264351 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 19 00:13:48.265097 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 19 00:13:48.266454 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 19 00:13:48.267895 systemd[1]: Mounted media.mount - External Media Directory.
Aug 19 00:13:48.269157 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 19 00:13:48.270444 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 19 00:13:48.271759 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 19 00:13:48.275136 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 19 00:13:48.276770 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 19 00:13:48.278482 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 19 00:13:48.278679 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 19 00:13:48.280424 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:13:48.280587 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:13:48.283463 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 19 00:13:48.283639 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 19 00:13:48.285190 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:13:48.285382 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:13:48.286801 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 19 00:13:48.286960 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 19 00:13:48.288426 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:13:48.288599 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:13:48.290240 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 19 00:13:48.291774 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 19 00:13:48.293456 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 19 00:13:48.295244 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 19 00:13:48.308383 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 19 00:13:48.310902 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 19 00:13:48.313131 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 19 00:13:48.314770 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 19 00:13:48.314803 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 19 00:13:48.316831 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 19 00:13:48.323974 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 19 00:13:48.325296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:13:48.326486 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 19 00:13:48.328714 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 19 00:13:48.330192 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 19 00:13:48.332299 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 19 00:13:48.334028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 19 00:13:48.336266 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:13:48.339499 systemd-journald[1158]: Time spent on flushing to /var/log/journal/673c8a83742c4e8fbe354b7c174f8f2d is 16.809ms for 885 entries.
Aug 19 00:13:48.339499 systemd-journald[1158]: System Journal (/var/log/journal/673c8a83742c4e8fbe354b7c174f8f2d) is 8M, max 195.6M, 187.6M free.
Aug 19 00:13:48.370280 systemd-journald[1158]: Received client request to flush runtime journal.
Aug 19 00:13:48.370325 kernel: loop0: detected capacity change from 0 to 119320
Aug 19 00:13:48.339362 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 19 00:13:48.342908 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 19 00:13:48.347189 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 19 00:13:48.348994 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 19 00:13:48.350779 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 19 00:13:48.354782 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 19 00:13:48.356827 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 19 00:13:48.359757 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 19 00:13:48.373331 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 19 00:13:48.381389 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Aug 19 00:13:48.381400 systemd-tmpfiles[1201]: ACLs are not supported, ignoring.
Aug 19 00:13:48.383281 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:13:48.387541 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 19 00:13:48.385998 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 19 00:13:48.393506 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 19 00:13:48.406666 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 19 00:13:48.414351 kernel: loop1: detected capacity change from 0 to 207008
Aug 19 00:13:48.434182 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 19 00:13:48.437737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 19 00:13:48.451185 kernel: loop2: detected capacity change from 0 to 100608
Aug 19 00:13:48.455426 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 19 00:13:48.455446 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
Aug 19 00:13:48.459027 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 19 00:13:48.483175 kernel: loop3: detected capacity change from 0 to 119320
Aug 19 00:13:48.490139 kernel: loop4: detected capacity change from 0 to 207008
Aug 19 00:13:48.501180 kernel: loop5: detected capacity change from 0 to 100608
Aug 19 00:13:48.509241 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Aug 19 00:13:48.509651 (sd-merge)[1225]: Merged extensions into '/usr'.
Aug 19 00:13:48.514548 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)...
Aug 19 00:13:48.514695 systemd[1]: Reloading...
Aug 19 00:13:48.598299 zram_generator::config[1251]: No configuration found.
Aug 19 00:13:48.705270 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Aug 19 00:13:48.760258 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Aug 19 00:13:48.760514 systemd[1]: Reloading finished in 245 ms.
Aug 19 00:13:48.794970 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Aug 19 00:13:48.798138 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Aug 19 00:13:48.817740 systemd[1]: Starting ensure-sysext.service...
Aug 19 00:13:48.819839 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 19 00:13:48.836759 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)...
Aug 19 00:13:48.836781 systemd[1]: Reloading...
Aug 19 00:13:48.838561 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 19 00:13:48.838708 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 19 00:13:48.838994 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 19 00:13:48.839250 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 19 00:13:48.839903 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 19 00:13:48.840137 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Aug 19 00:13:48.840194 systemd-tmpfiles[1286]: ACLs are not supported, ignoring.
Aug 19 00:13:48.843585 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Aug 19 00:13:48.843599 systemd-tmpfiles[1286]: Skipping /boot
Aug 19 00:13:48.849734 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot.
Aug 19 00:13:48.849752 systemd-tmpfiles[1286]: Skipping /boot
Aug 19 00:13:48.884129 zram_generator::config[1311]: No configuration found.
Aug 19 00:13:49.027532 systemd[1]: Reloading finished in 190 ms.
Aug 19 00:13:49.037818 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Aug 19 00:13:49.041616 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 19 00:13:49.066650 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:13:49.072250 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 19 00:13:49.077296 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 19 00:13:49.083804 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 19 00:13:49.096221 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 19 00:13:49.106840 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 19 00:13:49.123298 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 19 00:13:49.135114 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 19 00:13:49.141978 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:13:49.143852 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:13:49.146454 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:13:49.162570 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Aug 19 00:13:49.165137 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:13:49.166433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:13:49.166644 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:13:49.168074 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 19 00:13:49.174444 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 19 00:13:49.175705 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 19 00:13:49.178276 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:13:49.178542 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:13:49.180429 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:13:49.180665 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:13:49.182522 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:13:49.182679 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:13:49.183842 augenrules[1381]: No rules
Aug 19 00:13:49.184558 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:13:49.194353 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:13:49.195901 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 19 00:13:49.198373 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 19 00:13:49.204910 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 19 00:13:49.224141 systemd[1]: Finished ensure-sysext.service.
Aug 19 00:13:49.228907 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 19 00:13:49.231704 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 19 00:13:49.234548 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 19 00:13:49.238419 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 19 00:13:49.242889 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 19 00:13:49.255488 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 19 00:13:49.256746 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 19 00:13:49.256801 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 19 00:13:49.258762 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 19 00:13:49.266340 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 19 00:13:49.267555 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 19 00:13:49.267887 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 19 00:13:49.269946 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Aug 19 00:13:49.293604 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 19 00:13:49.295211 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 19 00:13:49.296982 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 19 00:13:49.297233 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 19 00:13:49.298824 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 19 00:13:49.298984 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 19 00:13:49.300592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 19 00:13:49.300742 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 19 00:13:49.302088 augenrules[1419]: /sbin/augenrules: No change
Aug 19 00:13:49.305184 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 19 00:13:49.305269 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 19 00:13:49.311491 augenrules[1456]: No rules
Aug 19 00:13:49.317665 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 19 00:13:49.318252 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 19 00:13:49.371711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Aug 19 00:13:49.374631 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 19 00:13:49.380037 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 19 00:13:49.410699 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 19 00:13:49.441038 systemd-networkd[1427]: lo: Link UP
Aug 19 00:13:49.441046 systemd-networkd[1427]: lo: Gained carrier
Aug 19 00:13:49.442534 systemd-networkd[1427]: Enumeration completed
Aug 19 00:13:49.442665 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 19 00:13:49.442943 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:13:49.442950 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 19 00:13:49.443536 systemd-networkd[1427]: eth0: Link UP
Aug 19 00:13:49.443648 systemd-networkd[1427]: eth0: Gained carrier
Aug 19 00:13:49.443662 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 19 00:13:49.444383 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 19 00:13:49.446451 systemd[1]: Reached target time-set.target - System Time Set.
Aug 19 00:13:49.448636 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 19 00:13:49.451331 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 19 00:13:49.464183 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.59/16, gateway 10.0.0.1 acquired from 10.0.0.1
Aug 19 00:13:49.464742 systemd-timesyncd[1431]: Network configuration changed, trying to establish connection.
Aug 19 00:13:49.470097 systemd-timesyncd[1431]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Aug 19 00:13:49.470161 systemd-timesyncd[1431]: Initial clock synchronization to Tue 2025-08-19 00:13:49.844335 UTC.
Aug 19 00:13:49.475933 systemd-resolved[1353]: Positive Trust Anchors:
Aug 19 00:13:49.475947 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 19 00:13:49.475979 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 19 00:13:49.481159 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 19 00:13:49.483023 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 19 00:13:49.486411 systemd-resolved[1353]: Defaulting to hostname 'linux'.
Aug 19 00:13:49.488448 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 19 00:13:49.489996 systemd[1]: Reached target network.target - Network.
Aug 19 00:13:49.491130 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 19 00:13:49.492380 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 19 00:13:49.493636 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 19 00:13:49.494916 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 19 00:13:49.496379 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 19 00:13:49.497721 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 19 00:13:49.498993 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
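The systemd-resolved entries above list its built-in positive trust anchor: the DS record of the IANA root zone key (KSK-2017, key tag 20326). As a hedged aside (not part of the log), systemd-resolved also accepts explicit anchors from `dnssec-trust-anchors.d`; a file supplying the same record might look like this (the path and the idea of overriding are from the resolved documentation, the record itself is copied from the log):

```ini
; Illustrative /etc/dnssec-trust-anchors.d/root.positive
; Same root DS record that systemd-resolved logs as its default positive trust anchor.
. IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
```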
Aug 19 00:13:49.500270 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 19 00:13:49.500307 systemd[1]: Reached target paths.target - Path Units.
Aug 19 00:13:49.501314 systemd[1]: Reached target timers.target - Timer Units.
Aug 19 00:13:49.503075 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 19 00:13:49.505628 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 19 00:13:49.508683 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 19 00:13:49.510172 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 19 00:13:49.511471 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 19 00:13:49.514691 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 19 00:13:49.516422 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 19 00:13:49.518203 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 19 00:13:49.519413 systemd[1]: Reached target sockets.target - Socket Units.
Aug 19 00:13:49.520430 systemd[1]: Reached target basic.target - Basic System.
Aug 19 00:13:49.521408 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 19 00:13:49.521441 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 19 00:13:49.522441 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 19 00:13:49.524537 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 19 00:13:49.526590 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 19 00:13:49.528986 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 19 00:13:49.531311 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 19 00:13:49.532454 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 19 00:13:49.533433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 19 00:13:49.536282 jq[1499]: false
Aug 19 00:13:49.536464 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 19 00:13:49.540264 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 19 00:13:49.544076 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 19 00:13:49.553488 extend-filesystems[1500]: Found /dev/vda6
Aug 19 00:13:49.552564 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 19 00:13:49.554592 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 19 00:13:49.555138 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 19 00:13:49.555765 systemd[1]: Starting update-engine.service - Update Engine...
Aug 19 00:13:49.560322 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 19 00:13:49.563243 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 19 00:13:49.564103 extend-filesystems[1500]: Found /dev/vda9
Aug 19 00:13:49.566081 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 19 00:13:49.570474 extend-filesystems[1500]: Checking size of /dev/vda9
Aug 19 00:13:49.570669 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 19 00:13:49.571039 systemd[1]: motdgen.service: Deactivated successfully.
Aug 19 00:13:49.571308 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 19 00:13:49.574409 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 19 00:13:49.574596 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 19 00:13:49.583767 (ntainerd)[1525]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 19 00:13:49.588493 jq[1514]: true
Aug 19 00:13:49.593303 extend-filesystems[1500]: Resized partition /dev/vda9
Aug 19 00:13:49.602935 extend-filesystems[1537]: resize2fs 1.47.2 (1-Jan-2025)
Aug 19 00:13:49.616109 jq[1536]: true
Aug 19 00:13:49.628413 tar[1524]: linux-arm64/LICENSE
Aug 19 00:13:49.628674 tar[1524]: linux-arm64/helm
Aug 19 00:13:49.632148 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Aug 19 00:13:49.636626 update_engine[1512]: I20250819 00:13:49.636334 1512 main.cc:92] Flatcar Update Engine starting
Aug 19 00:13:49.642582 dbus-daemon[1497]: [system] SELinux support is enabled
Aug 19 00:13:49.645293 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 19 00:13:49.649014 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 19 00:13:49.649044 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 19 00:13:49.652006 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 19 00:13:49.652033 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 19 00:13:49.663304 systemd[1]: Started update-engine.service - Update Engine.
Aug 19 00:13:49.664507 update_engine[1512]: I20250819 00:13:49.664448 1512 update_check_scheduler.cc:74] Next update check in 3m11s
Aug 19 00:13:49.670418 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 19 00:13:49.683092 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Aug 19 00:13:49.683337 systemd-logind[1510]: Watching system buttons on /dev/input/event0 (Power Button)
Aug 19 00:13:49.684522 systemd-logind[1510]: New seat seat0.
Aug 19 00:13:49.684836 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Aug 19 00:13:49.684836 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 19 00:13:49.684836 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Aug 19 00:13:49.701468 extend-filesystems[1500]: Resized filesystem in /dev/vda9
Aug 19 00:13:49.703426 bash[1556]: Updated "/home/core/.ssh/authorized_keys"
Aug 19 00:13:49.688954 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 19 00:13:49.691775 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 19 00:13:49.693551 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 19 00:13:49.693735 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 19 00:13:49.700386 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
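The resize2fs entries above count 4 KiB blocks. As an illustrative sanity check (not part of the log), the reported resize from 553472 to 1864699 blocks works out as follows:

```python
# Convert the ext4 block counts reported by resize2fs into bytes and GiB.
BLOCK_SIZE = 4096  # "(4k) blocks" per the log

old_blocks, new_blocks = 553472, 1864699
old_bytes = old_blocks * BLOCK_SIZE
new_bytes = new_blocks * BLOCK_SIZE

print(f"before: {old_bytes / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {new_bytes / 2**30:.2f} GiB")  # ~7.11 GiB
```

So the root filesystem grows from roughly 2.1 GiB to roughly 7.1 GiB, matching an online expansion of /dev/vda9 to fill the disk.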
Aug 19 00:13:49.743278 locksmithd[1557]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Aug 19 00:13:49.854068 containerd[1525]: time="2025-08-19T00:13:49Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 19 00:13:49.855980 containerd[1525]: time="2025-08-19T00:13:49.855943800Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5
Aug 19 00:13:49.865394 containerd[1525]: time="2025-08-19T00:13:49.865348160Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.52µs"
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865483200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865507960Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865667520Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865691800Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865715840Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865764120Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865776160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865985200Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.865998720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.866008680Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.866016520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867009 containerd[1525]: time="2025-08-19T00:13:49.866077320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867319 containerd[1525]: time="2025-08-19T00:13:49.867085560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867737 containerd[1525]: time="2025-08-19T00:13:49.867708320Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 19 00:13:49.867737 containerd[1525]: time="2025-08-19T00:13:49.867734240Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 19 00:13:49.868044 containerd[1525]: time="2025-08-19T00:13:49.867982680Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 19 00:13:49.868596 containerd[1525]: time="2025-08-19T00:13:49.868570240Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 19 00:13:49.868690 containerd[1525]: time="2025-08-19T00:13:49.868673480Z" level=info msg="metadata content store policy set" policy=shared
Aug 19 00:13:49.873294 containerd[1525]: time="2025-08-19T00:13:49.873201800Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 19 00:13:49.873428 containerd[1525]: time="2025-08-19T00:13:49.873360520Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 19 00:13:49.873480 containerd[1525]: time="2025-08-19T00:13:49.873436680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 19 00:13:49.873514 containerd[1525]: time="2025-08-19T00:13:49.873493240Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 19 00:13:49.873588 containerd[1525]: time="2025-08-19T00:13:49.873555360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 19 00:13:49.873588 containerd[1525]: time="2025-08-19T00:13:49.873582400Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 19 00:13:49.873634 containerd[1525]: time="2025-08-19T00:13:49.873598400Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 19 00:13:49.873634 containerd[1525]: time="2025-08-19T00:13:49.873612800Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 19 00:13:49.873634 containerd[1525]: time="2025-08-19T00:13:49.873630640Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 19 00:13:49.873699 containerd[1525]: time="2025-08-19T00:13:49.873664920Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 19 00:13:49.873699 containerd[1525]: time="2025-08-19T00:13:49.873677840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 19 00:13:49.873699 containerd[1525]: time="2025-08-19T00:13:49.873694080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 19 00:13:49.873870 containerd[1525]: time="2025-08-19T00:13:49.873840720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 19 00:13:49.873894 containerd[1525]: time="2025-08-19T00:13:49.873873240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 19 00:13:49.873894 containerd[1525]: time="2025-08-19T00:13:49.873891040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 19 00:13:49.873931 containerd[1525]: time="2025-08-19T00:13:49.873903480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 19 00:13:49.873931 containerd[1525]: time="2025-08-19T00:13:49.873915280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 19 00:13:49.873931 containerd[1525]: time="2025-08-19T00:13:49.873929200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 19 00:13:49.873980 containerd[1525]: time="2025-08-19T00:13:49.873942280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 19 00:13:49.873980 containerd[1525]: time="2025-08-19T00:13:49.873955000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 19 00:13:49.873980 containerd[1525]: time="2025-08-19T00:13:49.873967240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 19 00:13:49.873980 containerd[1525]: time="2025-08-19T00:13:49.873978720Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 19 00:13:49.874062 containerd[1525]: time="2025-08-19T00:13:49.873992120Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 19 00:13:49.874613 containerd[1525]: time="2025-08-19T00:13:49.874583760Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 19 00:13:49.874639 containerd[1525]: time="2025-08-19T00:13:49.874613680Z" level=info msg="Start snapshots syncer"
Aug 19 00:13:49.875523 containerd[1525]: time="2025-08-19T00:13:49.875412560Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 19 00:13:49.876959 containerd[1525]: time="2025-08-19T00:13:49.876686560Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 19 00:13:49.877051 containerd[1525]: time="2025-08-19T00:13:49.876972760Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 19 00:13:49.877273 containerd[1525]: time="2025-08-19T00:13:49.877249360Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 19 00:13:49.877592 containerd[1525]: time="2025-08-19T00:13:49.877554000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 19 00:13:49.877626 containerd[1525]: time="2025-08-19T00:13:49.877595000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 19 00:13:49.877626 containerd[1525]: time="2025-08-19T00:13:49.877609840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 19 00:13:49.877626 containerd[1525]: time="2025-08-19T00:13:49.877621280Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 19 00:13:49.877691 containerd[1525]: time="2025-08-19T00:13:49.877634120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 19 00:13:49.877691 containerd[1525]: time="2025-08-19T00:13:49.877648800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 19 00:13:49.877691 containerd[1525]: time="2025-08-19T00:13:49.877660920Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 19 00:13:49.877691 containerd[1525]: time="2025-08-19T00:13:49.877687520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 19 00:13:49.877757 containerd[1525]: time="2025-08-19T00:13:49.877700640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 19 00:13:49.877757 containerd[1525]: time="2025-08-19T00:13:49.877712360Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Aug 19 00:13:49.877897 containerd[1525]: time="2025-08-19T00:13:49.877861840Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 19 00:13:49.878113 containerd[1525]: time="2025-08-19T00:13:49.878078960Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Aug 19 00:13:49.878140 containerd[1525]: time="2025-08-19T00:13:49.878113280Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 19 00:13:49.878202 containerd[1525]: time="2025-08-19T00:13:49.878176320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Aug 19 00:13:49.878227 containerd[1525]: time="2025-08-19T00:13:49.878203560Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Aug 19 00:13:49.878319 containerd[1525]: time="2025-08-19T00:13:49.878263880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Aug 19 00:13:49.878340 containerd[1525]: time="2025-08-19T00:13:49.878323800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Aug 19 00:13:49.878612 containerd[1525]: time="2025-08-19T00:13:49.878591960Z" level=info msg="runtime interface created"
Aug 19 00:13:49.878612 containerd[1525]: time="2025-08-19T00:13:49.878602000Z" level=info msg="created NRI interface"
Aug 19 00:13:49.878651 containerd[1525]: time="2025-08-19T00:13:49.878613520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Aug 19 00:13:49.878651 containerd[1525]: time="2025-08-19T00:13:49.878627000Z" level=info msg="Connect containerd service"
Aug 19 00:13:49.878685 containerd[1525]: time="2025-08-19T00:13:49.878660960Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Aug 19 00:13:49.880978 containerd[1525]: time="2025-08-19T00:13:49.880940040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 19 00:13:49.954649 tar[1524]: linux-arm64/README.md
Aug 19 00:13:49.973204 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Aug 19 00:13:49.979565 containerd[1525]: time="2025-08-19T00:13:49.979493880Z" level=info msg="Start subscribing containerd event"
Aug 19 00:13:49.979676 containerd[1525]: time="2025-08-19T00:13:49.979583240Z" level=info msg="Start recovering state"
Aug 19 00:13:49.979732 containerd[1525]: time="2025-08-19T00:13:49.979714920Z" level=info msg="Start event monitor"
Aug 19 00:13:49.979844 containerd[1525]: time="2025-08-19T00:13:49.979739200Z" level=info msg="Start cni network conf syncer for default"
Aug 19 00:13:49.979844 containerd[1525]: time="2025-08-19T00:13:49.979747680Z" level=info msg="Start streaming server"
Aug 19 00:13:49.979844 containerd[1525]: time="2025-08-19T00:13:49.979757840Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Aug 19 00:13:49.979844 containerd[1525]: time="2025-08-19T00:13:49.979766000Z" level=info msg="runtime interface starting up..."
Aug 19 00:13:49.979844 containerd[1525]: time="2025-08-19T00:13:49.979782880Z" level=info msg="starting plugins..."
Aug 19 00:13:49.979844 containerd[1525]: time="2025-08-19T00:13:49.979799160Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Aug 19 00:13:49.980035 containerd[1525]: time="2025-08-19T00:13:49.979931240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Aug 19 00:13:49.980035 containerd[1525]: time="2025-08-19T00:13:49.979979440Z" level=info msg=serving... address=/run/containerd/containerd.sock
Aug 19 00:13:49.980077 containerd[1525]: time="2025-08-19T00:13:49.980058000Z" level=info msg="containerd successfully booted in 0.126396s"
Aug 19 00:13:49.980256 systemd[1]: Started containerd.service - containerd container runtime.
Aug 19 00:13:50.763469 sshd_keygen[1521]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Aug 19 00:13:50.783695 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Aug 19 00:13:50.788726 systemd[1]: Starting issuegen.service - Generate /run/issue...
Aug 19 00:13:50.798269 systemd-networkd[1427]: eth0: Gained IPv6LL
Aug 19 00:13:50.804357 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Aug 19 00:13:50.806506 systemd[1]: Reached target network-online.target - Network is Online.
Aug 19 00:13:50.809081 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Aug 19 00:13:50.811844 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 19 00:13:50.833497 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Aug 19 00:13:50.836860 systemd[1]: issuegen.service: Deactivated successfully.
Aug 19 00:13:50.838232 systemd[1]: Finished issuegen.service - Generate /run/issue.
Aug 19 00:13:50.844970 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Aug 19 00:13:50.856543 systemd[1]: coreos-metadata.service: Deactivated successfully.
Aug 19 00:13:50.856765 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Aug 19 00:13:50.858819 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Aug 19 00:13:50.863006 systemd[1]: Started getty@tty1.service - Getty on tty1.
Aug 19 00:13:50.864535 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Aug 19 00:13:50.866036 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
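The containerd error earlier in the log ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a node where no CNI plugin has been installed yet; the CRI plugin retries once a network configuration appears. As an illustrative sketch (not taken from this system; the name and subnet are made up), a minimal conflist in `/etc/cni/net.d` would look like:

```json
{
  "cniVersion": "1.0.0",
  "name": "examplenet",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
```

On a Kubernetes node this file is usually dropped in by the cluster's network add-on rather than written by hand, which is why the error clears on its own once the add-on is deployed.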
Aug 19 00:13:50.867722 systemd[1]: Reached target getty.target - Login Prompts.
Aug 19 00:13:50.869563 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Aug 19 00:13:51.441121 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 19 00:13:51.442946 systemd[1]: Reached target multi-user.target - Multi-User System.
Aug 19 00:13:51.446385 (kubelet)[1629]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 19 00:13:51.448286 systemd[1]: Startup finished in 2.104s (kernel) + 6.866s (initrd) + 3.998s (userspace) = 12.969s.
Aug 19 00:13:51.929748 kubelet[1629]: E0819 00:13:51.929638 1629 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 19 00:13:51.932323 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 19 00:13:51.932460 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 19 00:13:51.932834 systemd[1]: kubelet.service: Consumed 811ms CPU time, 259.3M memory peak.
Aug 19 00:13:54.114922 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Aug 19 00:13:54.116148 systemd[1]: Started sshd@0-10.0.0.59:22-10.0.0.1:36930.service - OpenSSH per-connection server daemon (10.0.0.1:36930).
Aug 19 00:13:54.197464 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 36930 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:13:54.199388 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:13:54.205521 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
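The kubelet failure above is the usual pre-bootstrap state: the unit starts at boot but exits because `/var/lib/kubelet/config.yaml` does not exist yet. On a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`, after which the kubelet restarts cleanly. For orientation only, a minimal KubeletConfiguration has this shape (the field values here are illustrative, not recovered from this host):

```yaml
# Illustrative /var/lib/kubelet/config.yaml — normally generated by kubeadm,
# shown only to indicate the expected kind/apiVersion; values are examples.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  anonymous:
    enabled: false
cgroupDriver: systemd
```

The earlier warning about unset `KUBELET_EXTRA_ARGS` / `KUBELET_KUBEADM_ARGS` is likewise harmless; those are optional environment variables referenced by the kubeadm drop-in for kubelet.service.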
Aug 19 00:13:54.206489 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Aug 19 00:13:54.214165 systemd-logind[1510]: New session 1 of user core.
Aug 19 00:13:54.232204 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Aug 19 00:13:54.234722 systemd[1]: Starting user@500.service - User Manager for UID 500...
Aug 19 00:13:54.253677 (systemd)[1647]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Aug 19 00:13:54.256111 systemd-logind[1510]: New session c1 of user core.
Aug 19 00:13:54.369088 systemd[1647]: Queued start job for default target default.target.
Aug 19 00:13:54.393264 systemd[1647]: Created slice app.slice - User Application Slice.
Aug 19 00:13:54.393298 systemd[1647]: Reached target paths.target - Paths.
Aug 19 00:13:54.393351 systemd[1647]: Reached target timers.target - Timers.
Aug 19 00:13:54.394719 systemd[1647]: Starting dbus.socket - D-Bus User Message Bus Socket...
Aug 19 00:13:54.408472 systemd[1647]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Aug 19 00:13:54.408617 systemd[1647]: Reached target sockets.target - Sockets.
Aug 19 00:13:54.408665 systemd[1647]: Reached target basic.target - Basic System.
Aug 19 00:13:54.408693 systemd[1647]: Reached target default.target - Main User Target.
Aug 19 00:13:54.408720 systemd[1647]: Startup finished in 146ms.
Aug 19 00:13:54.408982 systemd[1]: Started user@500.service - User Manager for UID 500.
Aug 19 00:13:54.410404 systemd[1]: Started session-1.scope - Session 1 of User core.
Aug 19 00:13:54.484982 systemd[1]: Started sshd@1-10.0.0.59:22-10.0.0.1:36936.service - OpenSSH per-connection server daemon (10.0.0.1:36936).
Aug 19 00:13:54.532919 sshd[1658]: Accepted publickey for core from 10.0.0.1 port 36936 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:13:54.534318 sshd-session[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:13:54.538452 systemd-logind[1510]: New session 2 of user core.
Aug 19 00:13:54.550575 systemd[1]: Started session-2.scope - Session 2 of User core.
Aug 19 00:13:54.608957 sshd[1661]: Connection closed by 10.0.0.1 port 36936
Aug 19 00:13:54.609453 sshd-session[1658]: pam_unix(sshd:session): session closed for user core
Aug 19 00:13:54.624463 systemd[1]: sshd@1-10.0.0.59:22-10.0.0.1:36936.service: Deactivated successfully.
Aug 19 00:13:54.626279 systemd[1]: session-2.scope: Deactivated successfully.
Aug 19 00:13:54.627056 systemd-logind[1510]: Session 2 logged out. Waiting for processes to exit.
Aug 19 00:13:54.630771 systemd[1]: Started sshd@2-10.0.0.59:22-10.0.0.1:36946.service - OpenSSH per-connection server daemon (10.0.0.1:36946).
Aug 19 00:13:54.631526 systemd-logind[1510]: Removed session 2.
Aug 19 00:13:54.685879 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 36946 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:13:54.687299 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:13:54.692073 systemd-logind[1510]: New session 3 of user core.
Aug 19 00:13:54.702363 systemd[1]: Started session-3.scope - Session 3 of User core.
Aug 19 00:13:54.754361 sshd[1670]: Connection closed by 10.0.0.1 port 36946
Aug 19 00:13:54.754762 sshd-session[1667]: pam_unix(sshd:session): session closed for user core
Aug 19 00:13:54.767034 systemd[1]: sshd@2-10.0.0.59:22-10.0.0.1:36946.service: Deactivated successfully.
Aug 19 00:13:54.770738 systemd[1]: session-3.scope: Deactivated successfully.
Aug 19 00:13:54.771608 systemd-logind[1510]: Session 3 logged out. Waiting for processes to exit.
Aug 19 00:13:54.774400 systemd[1]: Started sshd@3-10.0.0.59:22-10.0.0.1:36948.service - OpenSSH per-connection server daemon (10.0.0.1:36948). Aug 19 00:13:54.774954 systemd-logind[1510]: Removed session 3. Aug 19 00:13:54.833087 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 36948 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:13:54.834429 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:13:54.838644 systemd-logind[1510]: New session 4 of user core. Aug 19 00:13:54.850364 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 19 00:13:54.903714 sshd[1679]: Connection closed by 10.0.0.1 port 36948 Aug 19 00:13:54.903299 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Aug 19 00:13:54.911246 systemd[1]: sshd@3-10.0.0.59:22-10.0.0.1:36948.service: Deactivated successfully. Aug 19 00:13:54.913568 systemd[1]: session-4.scope: Deactivated successfully. Aug 19 00:13:54.914499 systemd-logind[1510]: Session 4 logged out. Waiting for processes to exit. Aug 19 00:13:54.917355 systemd[1]: Started sshd@4-10.0.0.59:22-10.0.0.1:36962.service - OpenSSH per-connection server daemon (10.0.0.1:36962). Aug 19 00:13:54.917817 systemd-logind[1510]: Removed session 4. Aug 19 00:13:54.978918 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 36962 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:13:54.980190 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:13:54.983959 systemd-logind[1510]: New session 5 of user core. Aug 19 00:13:55.000375 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 19 00:13:55.067817 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 19 00:13:55.068150 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:13:55.082181 sudo[1689]: pam_unix(sudo:session): session closed for user root Aug 19 00:13:55.085003 sshd[1688]: Connection closed by 10.0.0.1 port 36962 Aug 19 00:13:55.084141 sshd-session[1685]: pam_unix(sshd:session): session closed for user core Aug 19 00:13:55.093335 systemd[1]: sshd@4-10.0.0.59:22-10.0.0.1:36962.service: Deactivated successfully. Aug 19 00:13:55.096531 systemd[1]: session-5.scope: Deactivated successfully. Aug 19 00:13:55.097358 systemd-logind[1510]: Session 5 logged out. Waiting for processes to exit. Aug 19 00:13:55.099829 systemd[1]: Started sshd@5-10.0.0.59:22-10.0.0.1:36978.service - OpenSSH per-connection server daemon (10.0.0.1:36978). Aug 19 00:13:55.100287 systemd-logind[1510]: Removed session 5. Aug 19 00:13:55.156907 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 36978 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:13:55.158384 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:13:55.162598 systemd-logind[1510]: New session 6 of user core. Aug 19 00:13:55.179335 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 19 00:13:55.232406 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 19 00:13:55.232683 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:13:55.237106 sudo[1700]: pam_unix(sudo:session): session closed for user root Aug 19 00:13:55.241575 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 19 00:13:55.241828 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:13:55.250806 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 19 00:13:55.294677 augenrules[1722]: No rules Aug 19 00:13:55.295867 systemd[1]: audit-rules.service: Deactivated successfully. Aug 19 00:13:55.296144 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 19 00:13:55.298610 sudo[1699]: pam_unix(sudo:session): session closed for user root Aug 19 00:13:55.300228 sshd[1698]: Connection closed by 10.0.0.1 port 36978 Aug 19 00:13:55.300335 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Aug 19 00:13:55.307064 systemd[1]: sshd@5-10.0.0.59:22-10.0.0.1:36978.service: Deactivated successfully. Aug 19 00:13:55.308768 systemd[1]: session-6.scope: Deactivated successfully. Aug 19 00:13:55.309572 systemd-logind[1510]: Session 6 logged out. Waiting for processes to exit. Aug 19 00:13:55.312869 systemd[1]: Started sshd@6-10.0.0.59:22-10.0.0.1:36980.service - OpenSSH per-connection server daemon (10.0.0.1:36980). Aug 19 00:13:55.313361 systemd-logind[1510]: Removed session 6. Aug 19 00:13:55.367250 sshd[1731]: Accepted publickey for core from 10.0.0.1 port 36980 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:13:55.368517 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:13:55.372606 systemd-logind[1510]: New session 7 of user core. 
Aug 19 00:13:55.381325 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 19 00:13:55.436753 sudo[1735]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 19 00:13:55.437977 sudo[1735]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 19 00:13:55.778257 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 19 00:13:55.796505 (dockerd)[1756]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 19 00:13:56.052245 dockerd[1756]: time="2025-08-19T00:13:56.052097935Z" level=info msg="Starting up" Aug 19 00:13:56.053216 dockerd[1756]: time="2025-08-19T00:13:56.053185544Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 19 00:13:56.063466 dockerd[1756]: time="2025-08-19T00:13:56.063417872Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Aug 19 00:13:56.160106 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3073511586-merged.mount: Deactivated successfully. Aug 19 00:13:56.179393 dockerd[1756]: time="2025-08-19T00:13:56.179344091Z" level=info msg="Loading containers: start." Aug 19 00:13:56.187160 kernel: Initializing XFRM netlink socket Aug 19 00:13:56.414482 systemd-networkd[1427]: docker0: Link UP Aug 19 00:13:56.418664 dockerd[1756]: time="2025-08-19T00:13:56.418612166Z" level=info msg="Loading containers: done." 
Aug 19 00:13:56.435858 dockerd[1756]: time="2025-08-19T00:13:56.435799580Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 19 00:13:56.436019 dockerd[1756]: time="2025-08-19T00:13:56.435965717Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Aug 19 00:13:56.436093 dockerd[1756]: time="2025-08-19T00:13:56.436064346Z" level=info msg="Initializing buildkit" Aug 19 00:13:56.464847 dockerd[1756]: time="2025-08-19T00:13:56.464785535Z" level=info msg="Completed buildkit initialization" Aug 19 00:13:56.470216 dockerd[1756]: time="2025-08-19T00:13:56.470169467Z" level=info msg="Daemon has completed initialization" Aug 19 00:13:56.470731 dockerd[1756]: time="2025-08-19T00:13:56.470251800Z" level=info msg="API listen on /run/docker.sock" Aug 19 00:13:56.470420 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 19 00:13:57.079153 containerd[1525]: time="2025-08-19T00:13:57.079090540Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\"" Aug 19 00:13:57.717463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3165970708.mount: Deactivated successfully. 
Aug 19 00:13:58.577939 containerd[1525]: time="2025-08-19T00:13:58.577877996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:58.578942 containerd[1525]: time="2025-08-19T00:13:58.578907933Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.8: active requests=0, bytes read=26328359" Aug 19 00:13:58.579649 containerd[1525]: time="2025-08-19T00:13:58.579619309Z" level=info msg="ImageCreate event name:\"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:58.583268 containerd[1525]: time="2025-08-19T00:13:58.583201631Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:58.584243 containerd[1525]: time="2025-08-19T00:13:58.584206369Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.8\" with image id \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6e1a2f9b24f69ee77d0c0edaf32b31fdbb5e1a613f4476272197e6e1e239050b\", size \"26325157\" in 1.505049457s" Aug 19 00:13:58.584431 containerd[1525]: time="2025-08-19T00:13:58.584345206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.8\" returns image reference \"sha256:61d628eec7e2101b908b4476f1e8e620490a9e8754184860c8eed25183acaa8a\"" Aug 19 00:13:58.585190 containerd[1525]: time="2025-08-19T00:13:58.585136243Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\"" Aug 19 00:13:59.648066 containerd[1525]: time="2025-08-19T00:13:59.647981656Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:59.649173 containerd[1525]: time="2025-08-19T00:13:59.649137206Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.8: active requests=0, bytes read=22528554" Aug 19 00:13:59.650393 containerd[1525]: time="2025-08-19T00:13:59.650332833Z" level=info msg="ImageCreate event name:\"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:59.653680 containerd[1525]: time="2025-08-19T00:13:59.653633582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:13:59.654591 containerd[1525]: time="2025-08-19T00:13:59.654552246Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.8\" with image id \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:8788ccd28ceed9e2e5f8fc31375ef5771df8ea6e518b362c9a06f3cc709cd6c7\", size \"24065666\" in 1.069304657s" Aug 19 00:13:59.654591 containerd[1525]: time="2025-08-19T00:13:59.654582547Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.8\" returns image reference \"sha256:f17de36e40fc7cc372be0021b2c58ad61f05d3ebe4d430551bc5e4cd9ed2a061\"" Aug 19 00:13:59.655048 containerd[1525]: time="2025-08-19T00:13:59.655018028Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\"" Aug 19 00:14:00.766920 containerd[1525]: time="2025-08-19T00:14:00.766866488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:00.767985 containerd[1525]: time="2025-08-19T00:14:00.767797367Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.8: active requests=0, bytes read=17483529" Aug 19 00:14:00.768882 containerd[1525]: time="2025-08-19T00:14:00.768822594Z" level=info msg="ImageCreate event name:\"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:00.771455 containerd[1525]: time="2025-08-19T00:14:00.771408963Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:00.772598 containerd[1525]: time="2025-08-19T00:14:00.772537849Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.8\" with image id \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:43c58bcbd1c7812dd19f8bfa5ae11093ebefd28699453ce86fc710869e155cd4\", size \"19020659\" in 1.11748536s" Aug 19 00:14:00.772598 containerd[1525]: time="2025-08-19T00:14:00.772575224Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.8\" returns image reference \"sha256:fe86d26bce3ccd5f0c4057c205b63fde1c8c752778025aea4605ffc3b0f80211\"" Aug 19 00:14:00.773130 containerd[1525]: time="2025-08-19T00:14:00.773065586Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\"" Aug 19 00:14:01.764617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1079241773.mount: Deactivated successfully. 
Aug 19 00:14:02.148437 containerd[1525]: time="2025-08-19T00:14:02.148301051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:02.149649 containerd[1525]: time="2025-08-19T00:14:02.149413800Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.8: active requests=0, bytes read=27376726" Aug 19 00:14:02.150600 containerd[1525]: time="2025-08-19T00:14:02.150559779Z" level=info msg="ImageCreate event name:\"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:02.152279 containerd[1525]: time="2025-08-19T00:14:02.152247173Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:02.152869 containerd[1525]: time="2025-08-19T00:14:02.152840432Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.8\" with image id \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\", repo tag \"registry.k8s.io/kube-proxy:v1.32.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:adc1335b480ddd833aac3b0bd20f68ff0f3c3cf7a0bd337933b006d9f5cec40a\", size \"27375743\" in 1.379743568s" Aug 19 00:14:02.152931 containerd[1525]: time="2025-08-19T00:14:02.152873016Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.8\" returns image reference \"sha256:2cf30e39f99f8f4ee1a736a4f3175cc2d8d3f58936d8fa83ec5523658fdc7b8b\"" Aug 19 00:14:02.153350 containerd[1525]: time="2025-08-19T00:14:02.153309088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 19 00:14:02.182919 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 19 00:14:02.185504 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 19 00:14:02.351350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:14:02.356146 (kubelet)[2056]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 19 00:14:02.400996 kubelet[2056]: E0819 00:14:02.400853 2056 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 19 00:14:02.404108 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 19 00:14:02.404259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 19 00:14:02.405258 systemd[1]: kubelet.service: Consumed 156ms CPU time, 107.2M memory peak. Aug 19 00:14:02.806916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2671868267.mount: Deactivated successfully. 
Aug 19 00:14:03.460729 containerd[1525]: time="2025-08-19T00:14:03.460655118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:03.461617 containerd[1525]: time="2025-08-19T00:14:03.461567701Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Aug 19 00:14:03.462417 containerd[1525]: time="2025-08-19T00:14:03.462375065Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:03.467969 containerd[1525]: time="2025-08-19T00:14:03.467906747Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:03.468949 containerd[1525]: time="2025-08-19T00:14:03.468897127Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.31555696s" Aug 19 00:14:03.468949 containerd[1525]: time="2025-08-19T00:14:03.468938263Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Aug 19 00:14:03.469801 containerd[1525]: time="2025-08-19T00:14:03.469468601Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 19 00:14:03.927761 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605221717.mount: Deactivated successfully. 
Aug 19 00:14:03.940052 containerd[1525]: time="2025-08-19T00:14:03.939989956Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:14:03.940629 containerd[1525]: time="2025-08-19T00:14:03.940601598Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Aug 19 00:14:03.941621 containerd[1525]: time="2025-08-19T00:14:03.941591051Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:14:03.943602 containerd[1525]: time="2025-08-19T00:14:03.943569351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 19 00:14:03.944216 containerd[1525]: time="2025-08-19T00:14:03.944154658Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 474.350351ms" Aug 19 00:14:03.944216 containerd[1525]: time="2025-08-19T00:14:03.944190188Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Aug 19 00:14:03.947714 containerd[1525]: time="2025-08-19T00:14:03.947496429Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 19 00:14:04.478315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1227476721.mount: 
Deactivated successfully. Aug 19 00:14:05.836540 containerd[1525]: time="2025-08-19T00:14:05.836478904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:05.837669 containerd[1525]: time="2025-08-19T00:14:05.837630087Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Aug 19 00:14:05.838626 containerd[1525]: time="2025-08-19T00:14:05.838569581Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:05.845612 containerd[1525]: time="2025-08-19T00:14:05.845552149Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 19 00:14:05.849231 containerd[1525]: time="2025-08-19T00:14:05.846893162Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.899329732s" Aug 19 00:14:05.849231 containerd[1525]: time="2025-08-19T00:14:05.846954507Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Aug 19 00:14:10.571418 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:14:10.572079 systemd[1]: kubelet.service: Consumed 156ms CPU time, 107.2M memory peak. Aug 19 00:14:10.574371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 19 00:14:10.601276 systemd[1]: Reload requested from client PID 2203 ('systemctl') (unit session-7.scope)... Aug 19 00:14:10.601295 systemd[1]: Reloading... Aug 19 00:14:10.700460 zram_generator::config[2250]: No configuration found. Aug 19 00:14:11.003040 systemd[1]: Reloading finished in 401 ms. Aug 19 00:14:11.076866 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 19 00:14:11.076963 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 19 00:14:11.077283 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:14:11.077340 systemd[1]: kubelet.service: Consumed 101ms CPU time, 95M memory peak. Aug 19 00:14:11.079136 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:14:11.235396 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:14:11.240099 (kubelet)[2292]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:14:11.280533 kubelet[2292]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:14:11.280533 kubelet[2292]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 00:14:11.280533 kubelet[2292]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 19 00:14:11.280533 kubelet[2292]: I0819 00:14:11.280483 2292 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:14:11.727618 kubelet[2292]: I0819 00:14:11.727408 2292 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 19 00:14:11.727618 kubelet[2292]: I0819 00:14:11.727466 2292 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:14:11.727902 kubelet[2292]: I0819 00:14:11.727824 2292 server.go:954] "Client rotation is on, will bootstrap in background" Aug 19 00:14:11.787662 kubelet[2292]: E0819 00:14:11.787590 2292 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.59:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:14:11.789582 kubelet[2292]: I0819 00:14:11.789527 2292 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 19 00:14:11.798669 kubelet[2292]: I0819 00:14:11.798317 2292 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 19 00:14:11.803798 kubelet[2292]: I0819 00:14:11.803735 2292 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 19 00:14:11.804194 kubelet[2292]: I0819 00:14:11.804009 2292 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 19 00:14:11.804299 kubelet[2292]: I0819 00:14:11.804055 2292 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 19 00:14:11.804436 kubelet[2292]: I0819 00:14:11.804371 2292 topology_manager.go:138] "Creating topology manager with none policy" 
Aug 19 00:14:11.804436 kubelet[2292]: I0819 00:14:11.804382 2292 container_manager_linux.go:304] "Creating device plugin manager" Aug 19 00:14:11.804639 kubelet[2292]: I0819 00:14:11.804607 2292 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:14:11.810112 kubelet[2292]: I0819 00:14:11.810068 2292 kubelet.go:446] "Attempting to sync node with API server" Aug 19 00:14:11.810164 kubelet[2292]: I0819 00:14:11.810127 2292 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 19 00:14:11.813278 kubelet[2292]: I0819 00:14:11.813025 2292 kubelet.go:352] "Adding apiserver pod source" Aug 19 00:14:11.813278 kubelet[2292]: I0819 00:14:11.813068 2292 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 19 00:14:11.813898 kubelet[2292]: W0819 00:14:11.813823 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Aug 19 00:14:11.813948 kubelet[2292]: E0819 00:14:11.813896 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.59:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:14:11.815201 kubelet[2292]: W0819 00:14:11.815131 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Aug 19 00:14:11.815453 kubelet[2292]: E0819 00:14:11.815406 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.59:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:14:11.826990 kubelet[2292]: I0819 00:14:11.826954 2292 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Aug 19 00:14:11.828099 kubelet[2292]: I0819 00:14:11.828014 2292 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 19 00:14:11.830624 kubelet[2292]: W0819 00:14:11.830583 2292 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 19 00:14:11.834655 kubelet[2292]: I0819 00:14:11.834563 2292 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 19 00:14:11.834655 kubelet[2292]: I0819 00:14:11.834630 2292 server.go:1287] "Started kubelet" Aug 19 00:14:11.838124 kubelet[2292]: I0819 00:14:11.836278 2292 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 19 00:14:11.838124 kubelet[2292]: I0819 00:14:11.836663 2292 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 19 00:14:11.838124 kubelet[2292]: I0819 00:14:11.836757 2292 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 19 00:14:11.838124 kubelet[2292]: I0819 00:14:11.836866 2292 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 19 00:14:11.838124 kubelet[2292]: I0819 00:14:11.837800 2292 server.go:479] "Adding debug handlers to kubelet server" Aug 19 00:14:11.840484 kubelet[2292]: I0819 00:14:11.839004 2292 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 19 00:14:11.841560 kubelet[2292]: I0819 00:14:11.841534 2292 volume_manager.go:297] "Starting Kubelet 
Volume Manager" Aug 19 00:14:11.841808 kubelet[2292]: I0819 00:14:11.841791 2292 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 19 00:14:11.841943 kubelet[2292]: I0819 00:14:11.841931 2292 reconciler.go:26] "Reconciler: start to sync state" Aug 19 00:14:11.842185 kubelet[2292]: E0819 00:14:11.842149 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:14:11.842398 kubelet[2292]: E0819 00:14:11.842053 2292 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.59:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.59:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.185d02bfa29d5bb4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-08-19 00:14:11.83459218 +0000 UTC m=+0.590907425,LastTimestamp:2025-08-19 00:14:11.83459218 +0000 UTC m=+0.590907425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Aug 19 00:14:11.842752 kubelet[2292]: W0819 00:14:11.842658 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Aug 19 00:14:11.842880 kubelet[2292]: E0819 00:14:11.842854 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" 
logger="UnhandledError" Aug 19 00:14:11.843168 kubelet[2292]: E0819 00:14:11.843139 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="200ms" Aug 19 00:14:11.844914 kubelet[2292]: I0819 00:14:11.844756 2292 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 19 00:14:11.845149 kubelet[2292]: E0819 00:14:11.845096 2292 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 19 00:14:11.846395 kubelet[2292]: I0819 00:14:11.846372 2292 factory.go:221] Registration of the containerd container factory successfully Aug 19 00:14:11.846539 kubelet[2292]: I0819 00:14:11.846503 2292 factory.go:221] Registration of the systemd container factory successfully Aug 19 00:14:11.860489 kubelet[2292]: I0819 00:14:11.860282 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 19 00:14:11.861629 kubelet[2292]: I0819 00:14:11.861601 2292 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 19 00:14:11.861738 kubelet[2292]: I0819 00:14:11.861727 2292 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 19 00:14:11.861823 kubelet[2292]: I0819 00:14:11.861812 2292 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Aug 19 00:14:11.861870 kubelet[2292]: I0819 00:14:11.861861 2292 kubelet.go:2382] "Starting kubelet main sync loop" Aug 19 00:14:11.861985 kubelet[2292]: E0819 00:14:11.861965 2292 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 19 00:14:11.866801 kubelet[2292]: I0819 00:14:11.866712 2292 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 19 00:14:11.866801 kubelet[2292]: I0819 00:14:11.866733 2292 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 19 00:14:11.866801 kubelet[2292]: I0819 00:14:11.866770 2292 state_mem.go:36] "Initialized new in-memory state store" Aug 19 00:14:11.866980 kubelet[2292]: W0819 00:14:11.866871 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Aug 19 00:14:11.866980 kubelet[2292]: E0819 00:14:11.866913 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.59:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:14:11.942491 kubelet[2292]: E0819 00:14:11.942432 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:14:11.962849 kubelet[2292]: E0819 00:14:11.962800 2292 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 19 00:14:12.024962 kubelet[2292]: I0819 00:14:12.024823 2292 policy_none.go:49] "None policy: Start" Aug 19 00:14:12.024962 kubelet[2292]: I0819 00:14:12.024867 2292 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 19 
00:14:12.024962 kubelet[2292]: I0819 00:14:12.024884 2292 state_mem.go:35] "Initializing new in-memory state store" Aug 19 00:14:12.038164 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 19 00:14:12.043216 kubelet[2292]: E0819 00:14:12.043169 2292 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Aug 19 00:14:12.044922 kubelet[2292]: E0819 00:14:12.044884 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="400ms" Aug 19 00:14:12.055326 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 19 00:14:12.062752 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 19 00:14:12.076668 kubelet[2292]: I0819 00:14:12.076385 2292 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 19 00:14:12.076668 kubelet[2292]: I0819 00:14:12.076644 2292 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 19 00:14:12.076854 kubelet[2292]: I0819 00:14:12.076656 2292 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 19 00:14:12.077219 kubelet[2292]: I0819 00:14:12.076947 2292 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 19 00:14:12.077991 kubelet[2292]: E0819 00:14:12.077944 2292 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 19 00:14:12.077991 kubelet[2292]: E0819 00:14:12.077996 2292 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Aug 19 00:14:12.184331 kubelet[2292]: I0819 00:14:12.183785 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:14:12.184629 kubelet[2292]: E0819 00:14:12.184581 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Aug 19 00:14:12.189544 systemd[1]: Created slice kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice - libcontainer container kubepods-burstable-poda9176403b596d0b29ae8ad12d635226d.slice. Aug 19 00:14:12.232021 kubelet[2292]: E0819 00:14:12.222937 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:14:12.232021 kubelet[2292]: E0819 00:14:12.230831 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:14:12.226999 systemd[1]: Created slice kubepods-burstable-podf733520e1d5ff7d48f98a7b6240ee048.slice - libcontainer container kubepods-burstable-podf733520e1d5ff7d48f98a7b6240ee048.slice. Aug 19 00:14:12.235553 systemd[1]: Created slice kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice - libcontainer container kubepods-burstable-poda88c9297c136b0f15880bf567e89a977.slice. 
Aug 19 00:14:12.238042 kubelet[2292]: E0819 00:14:12.237984 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:14:12.244226 kubelet[2292]: I0819 00:14:12.244183 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:12.244226 kubelet[2292]: I0819 00:14:12.244233 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f733520e1d5ff7d48f98a7b6240ee048-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f733520e1d5ff7d48f98a7b6240ee048\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:12.244386 kubelet[2292]: I0819 00:14:12.244254 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f733520e1d5ff7d48f98a7b6240ee048-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f733520e1d5ff7d48f98a7b6240ee048\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:12.244386 kubelet[2292]: I0819 00:14:12.244277 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f733520e1d5ff7d48f98a7b6240ee048-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f733520e1d5ff7d48f98a7b6240ee048\") " pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:12.244386 kubelet[2292]: I0819 00:14:12.244298 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:12.244386 kubelet[2292]: I0819 00:14:12.244315 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:12.244386 kubelet[2292]: I0819 00:14:12.244330 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:12.244516 kubelet[2292]: I0819 00:14:12.244347 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:12.244516 kubelet[2292]: I0819 00:14:12.244363 2292 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost" Aug 19 00:14:12.387040 kubelet[2292]: I0819 00:14:12.386560 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:14:12.387040 
kubelet[2292]: E0819 00:14:12.386929 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Aug 19 00:14:12.446592 kubelet[2292]: E0819 00:14:12.446532 2292 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.59:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.59:6443: connect: connection refused" interval="800ms" Aug 19 00:14:12.534053 kubelet[2292]: E0819 00:14:12.534010 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:12.535261 containerd[1525]: time="2025-08-19T00:14:12.534877378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,}" Aug 19 00:14:12.539549 kubelet[2292]: E0819 00:14:12.539129 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:12.539824 containerd[1525]: time="2025-08-19T00:14:12.539771041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,}" Aug 19 00:14:12.541002 kubelet[2292]: E0819 00:14:12.540963 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:12.541649 containerd[1525]: time="2025-08-19T00:14:12.541499154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f733520e1d5ff7d48f98a7b6240ee048,Namespace:kube-system,Attempt:0,}" Aug 19 
00:14:12.619484 containerd[1525]: time="2025-08-19T00:14:12.619428044Z" level=info msg="connecting to shim 87176591cc981ffa8459c96f2bb07f96fb93b30efc777cd6e0696260cb03f833" address="unix:///run/containerd/s/bf53ef1ad882c5c6c2794dad68d18ea528487614c933d29515cac760f1dfef94" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:14:12.651458 containerd[1525]: time="2025-08-19T00:14:12.651289041Z" level=info msg="connecting to shim 3931a6b67d7b94da5077007308adc937d60d08f1cfb0a23838f74e317454cae4" address="unix:///run/containerd/s/1f9ca1efe28c12c4be8ac4b829e2d82dce767e19638f704f76e8ff4e56fa7f88" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:14:12.658871 containerd[1525]: time="2025-08-19T00:14:12.658468076Z" level=info msg="connecting to shim 491fffb89ef9865ab4d26e866e0746ddd2b11d88a12f2c1a513db7078bc2e373" address="unix:///run/containerd/s/52ec76ee46d184c577c576a2ccdb30529cbfc3123165613760b78c7825ad090a" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:14:12.667520 systemd[1]: Started cri-containerd-87176591cc981ffa8459c96f2bb07f96fb93b30efc777cd6e0696260cb03f833.scope - libcontainer container 87176591cc981ffa8459c96f2bb07f96fb93b30efc777cd6e0696260cb03f833. Aug 19 00:14:12.698579 systemd[1]: Started cri-containerd-3931a6b67d7b94da5077007308adc937d60d08f1cfb0a23838f74e317454cae4.scope - libcontainer container 3931a6b67d7b94da5077007308adc937d60d08f1cfb0a23838f74e317454cae4. Aug 19 00:14:12.703966 systemd[1]: Started cri-containerd-491fffb89ef9865ab4d26e866e0746ddd2b11d88a12f2c1a513db7078bc2e373.scope - libcontainer container 491fffb89ef9865ab4d26e866e0746ddd2b11d88a12f2c1a513db7078bc2e373. 
Aug 19 00:14:12.737668 containerd[1525]: time="2025-08-19T00:14:12.737591841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a9176403b596d0b29ae8ad12d635226d,Namespace:kube-system,Attempt:0,} returns sandbox id \"87176591cc981ffa8459c96f2bb07f96fb93b30efc777cd6e0696260cb03f833\"" Aug 19 00:14:12.740033 kubelet[2292]: E0819 00:14:12.739996 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:12.744246 containerd[1525]: time="2025-08-19T00:14:12.744192365Z" level=info msg="CreateContainer within sandbox \"87176591cc981ffa8459c96f2bb07f96fb93b30efc777cd6e0696260cb03f833\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 19 00:14:12.759996 containerd[1525]: time="2025-08-19T00:14:12.759921867Z" level=info msg="Container f0aa18e5ef497b3eac7f269734baa5cff67c51aeb93241698a17680ad1ca7c6e: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:12.760752 containerd[1525]: time="2025-08-19T00:14:12.760567503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:f733520e1d5ff7d48f98a7b6240ee048,Namespace:kube-system,Attempt:0,} returns sandbox id \"3931a6b67d7b94da5077007308adc937d60d08f1cfb0a23838f74e317454cae4\"" Aug 19 00:14:12.761667 kubelet[2292]: E0819 00:14:12.761639 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:12.762546 containerd[1525]: time="2025-08-19T00:14:12.762507541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a88c9297c136b0f15880bf567e89a977,Namespace:kube-system,Attempt:0,} returns sandbox id \"491fffb89ef9865ab4d26e866e0746ddd2b11d88a12f2c1a513db7078bc2e373\"" Aug 19 00:14:12.763311 kubelet[2292]: E0819 00:14:12.763282 2292 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:12.763403 kubelet[2292]: W0819 00:14:12.763339 2292 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.59:6443: connect: connection refused Aug 19 00:14:12.763433 kubelet[2292]: E0819 00:14:12.763410 2292 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.59:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.59:6443: connect: connection refused" logger="UnhandledError" Aug 19 00:14:12.763972 containerd[1525]: time="2025-08-19T00:14:12.763869309Z" level=info msg="CreateContainer within sandbox \"3931a6b67d7b94da5077007308adc937d60d08f1cfb0a23838f74e317454cae4\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 19 00:14:12.766443 containerd[1525]: time="2025-08-19T00:14:12.766402454Z" level=info msg="CreateContainer within sandbox \"491fffb89ef9865ab4d26e866e0746ddd2b11d88a12f2c1a513db7078bc2e373\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 19 00:14:12.775657 containerd[1525]: time="2025-08-19T00:14:12.775600683Z" level=info msg="CreateContainer within sandbox \"87176591cc981ffa8459c96f2bb07f96fb93b30efc777cd6e0696260cb03f833\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f0aa18e5ef497b3eac7f269734baa5cff67c51aeb93241698a17680ad1ca7c6e\"" Aug 19 00:14:12.776574 containerd[1525]: time="2025-08-19T00:14:12.776532988Z" level=info msg="StartContainer for \"f0aa18e5ef497b3eac7f269734baa5cff67c51aeb93241698a17680ad1ca7c6e\"" Aug 19 00:14:12.778129 containerd[1525]: time="2025-08-19T00:14:12.777999495Z" 
level=info msg="connecting to shim f0aa18e5ef497b3eac7f269734baa5cff67c51aeb93241698a17680ad1ca7c6e" address="unix:///run/containerd/s/bf53ef1ad882c5c6c2794dad68d18ea528487614c933d29515cac760f1dfef94" protocol=ttrpc version=3 Aug 19 00:14:12.782841 containerd[1525]: time="2025-08-19T00:14:12.782769893Z" level=info msg="Container 017d2493e7d8bc08fba70485a2ed4aa547c3634f920dcc40d067eb20ecc98733: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:12.787019 containerd[1525]: time="2025-08-19T00:14:12.786543586Z" level=info msg="Container 58d42a4f4c94192d67f58ff339755a6a69ceefda2851d70a385796f1abd2fd51: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:12.789374 kubelet[2292]: I0819 00:14:12.789334 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:14:12.790123 kubelet[2292]: E0819 00:14:12.790061 2292 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.59:6443/api/v1/nodes\": dial tcp 10.0.0.59:6443: connect: connection refused" node="localhost" Aug 19 00:14:12.792918 containerd[1525]: time="2025-08-19T00:14:12.792797172Z" level=info msg="CreateContainer within sandbox \"3931a6b67d7b94da5077007308adc937d60d08f1cfb0a23838f74e317454cae4\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"017d2493e7d8bc08fba70485a2ed4aa547c3634f920dcc40d067eb20ecc98733\"" Aug 19 00:14:12.793598 containerd[1525]: time="2025-08-19T00:14:12.793566435Z" level=info msg="StartContainer for \"017d2493e7d8bc08fba70485a2ed4aa547c3634f920dcc40d067eb20ecc98733\"" Aug 19 00:14:12.794693 containerd[1525]: time="2025-08-19T00:14:12.794660901Z" level=info msg="connecting to shim 017d2493e7d8bc08fba70485a2ed4aa547c3634f920dcc40d067eb20ecc98733" address="unix:///run/containerd/s/1f9ca1efe28c12c4be8ac4b829e2d82dce767e19638f704f76e8ff4e56fa7f88" protocol=ttrpc version=3 Aug 19 00:14:12.799257 containerd[1525]: time="2025-08-19T00:14:12.799203897Z" level=info msg="CreateContainer within 
sandbox \"491fffb89ef9865ab4d26e866e0746ddd2b11d88a12f2c1a513db7078bc2e373\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"58d42a4f4c94192d67f58ff339755a6a69ceefda2851d70a385796f1abd2fd51\"" Aug 19 00:14:12.799892 containerd[1525]: time="2025-08-19T00:14:12.799833374Z" level=info msg="StartContainer for \"58d42a4f4c94192d67f58ff339755a6a69ceefda2851d70a385796f1abd2fd51\"" Aug 19 00:14:12.801318 containerd[1525]: time="2025-08-19T00:14:12.801174771Z" level=info msg="connecting to shim 58d42a4f4c94192d67f58ff339755a6a69ceefda2851d70a385796f1abd2fd51" address="unix:///run/containerd/s/52ec76ee46d184c577c576a2ccdb30529cbfc3123165613760b78c7825ad090a" protocol=ttrpc version=3 Aug 19 00:14:12.805343 systemd[1]: Started cri-containerd-f0aa18e5ef497b3eac7f269734baa5cff67c51aeb93241698a17680ad1ca7c6e.scope - libcontainer container f0aa18e5ef497b3eac7f269734baa5cff67c51aeb93241698a17680ad1ca7c6e. Aug 19 00:14:12.821392 systemd[1]: Started cri-containerd-017d2493e7d8bc08fba70485a2ed4aa547c3634f920dcc40d067eb20ecc98733.scope - libcontainer container 017d2493e7d8bc08fba70485a2ed4aa547c3634f920dcc40d067eb20ecc98733. Aug 19 00:14:12.826742 systemd[1]: Started cri-containerd-58d42a4f4c94192d67f58ff339755a6a69ceefda2851d70a385796f1abd2fd51.scope - libcontainer container 58d42a4f4c94192d67f58ff339755a6a69ceefda2851d70a385796f1abd2fd51. 
Aug 19 00:14:12.865898 containerd[1525]: time="2025-08-19T00:14:12.865785162Z" level=info msg="StartContainer for \"f0aa18e5ef497b3eac7f269734baa5cff67c51aeb93241698a17680ad1ca7c6e\" returns successfully" Aug 19 00:14:12.887956 kubelet[2292]: E0819 00:14:12.887916 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:14:12.888633 kubelet[2292]: E0819 00:14:12.888563 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:12.921782 containerd[1525]: time="2025-08-19T00:14:12.913561360Z" level=info msg="StartContainer for \"017d2493e7d8bc08fba70485a2ed4aa547c3634f920dcc40d067eb20ecc98733\" returns successfully" Aug 19 00:14:12.957267 containerd[1525]: time="2025-08-19T00:14:12.946617673Z" level=info msg="StartContainer for \"58d42a4f4c94192d67f58ff339755a6a69ceefda2851d70a385796f1abd2fd51\" returns successfully" Aug 19 00:14:13.595055 kubelet[2292]: I0819 00:14:13.595020 2292 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Aug 19 00:14:13.920684 kubelet[2292]: E0819 00:14:13.920558 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:14:13.920781 kubelet[2292]: E0819 00:14:13.920725 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:13.927738 kubelet[2292]: E0819 00:14:13.927431 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:14:13.927738 kubelet[2292]: E0819 00:14:13.927587 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:13.928587 kubelet[2292]: E0819 00:14:13.928555 2292 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Aug 19 00:14:13.928772 kubelet[2292]: E0819 00:14:13.928712 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:14.556535 kubelet[2292]: E0819 00:14:14.556425 2292 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Aug 19 00:14:14.627763 kubelet[2292]: I0819 00:14:14.627707 2292 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Aug 19 00:14:14.643495 kubelet[2292]: I0819 00:14:14.643409 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:14.670357 kubelet[2292]: E0819 00:14:14.670082 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:14.670357 kubelet[2292]: I0819 00:14:14.670136 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:14.672572 kubelet[2292]: E0819 00:14:14.672522 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:14.672572 kubelet[2292]: I0819 00:14:14.672558 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 00:14:14.674963 
kubelet[2292]: E0819 00:14:14.674918 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 19 00:14:14.814897 kubelet[2292]: I0819 00:14:14.814760 2292 apiserver.go:52] "Watching apiserver" Aug 19 00:14:14.842222 kubelet[2292]: I0819 00:14:14.842174 2292 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 19 00:14:14.925444 kubelet[2292]: I0819 00:14:14.925395 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:14.926202 kubelet[2292]: I0819 00:14:14.926181 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Aug 19 00:14:14.927328 kubelet[2292]: I0819 00:14:14.927302 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:14.928261 kubelet[2292]: E0819 00:14:14.928153 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Aug 19 00:14:14.928491 kubelet[2292]: E0819 00:14:14.928365 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:14.929248 kubelet[2292]: E0819 00:14:14.929220 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Aug 19 00:14:14.930319 kubelet[2292]: E0819 00:14:14.930279 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:14.931066 kubelet[2292]: E0819 00:14:14.931040 2292 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:14.931267 kubelet[2292]: E0819 00:14:14.931233 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:15.927333 kubelet[2292]: I0819 00:14:15.927296 2292 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Aug 19 00:14:15.938975 kubelet[2292]: E0819 00:14:15.938942 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:16.928928 kubelet[2292]: E0819 00:14:16.928889 2292 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:17.215542 systemd[1]: Reload requested from client PID 2572 ('systemctl') (unit session-7.scope)... Aug 19 00:14:17.215560 systemd[1]: Reloading... Aug 19 00:14:17.296217 zram_generator::config[2618]: No configuration found. Aug 19 00:14:17.478008 systemd[1]: Reloading finished in 262 ms. Aug 19 00:14:17.507583 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 19 00:14:17.523589 systemd[1]: kubelet.service: Deactivated successfully. Aug 19 00:14:17.523823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:14:17.523881 systemd[1]: kubelet.service: Consumed 1.087s CPU time, 128.4M memory peak. Aug 19 00:14:17.526644 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 19 00:14:17.711379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 19 00:14:17.725558 (kubelet)[2657]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 19 00:14:17.781353 kubelet[2657]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:14:17.781353 kubelet[2657]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 19 00:14:17.781353 kubelet[2657]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 19 00:14:17.781936 kubelet[2657]: I0819 00:14:17.781775 2657 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 19 00:14:17.790002 kubelet[2657]: I0819 00:14:17.789954 2657 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 19 00:14:17.790002 kubelet[2657]: I0819 00:14:17.789996 2657 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 19 00:14:17.791066 kubelet[2657]: I0819 00:14:17.790581 2657 server.go:954] "Client rotation is on, will bootstrap in background" Aug 19 00:14:17.793362 kubelet[2657]: I0819 00:14:17.793317 2657 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 19 00:14:17.797021 kubelet[2657]: I0819 00:14:17.796971 2657 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 19 00:14:17.806991 kubelet[2657]: I0819 00:14:17.806705 2657 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 19 00:14:17.811807 kubelet[2657]: I0819 00:14:17.811696 2657 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 19 00:14:17.812079 kubelet[2657]: I0819 00:14:17.812020 2657 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 19 00:14:17.812288 kubelet[2657]: I0819 00:14:17.812058 2657 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 19 00:14:17.812393 kubelet[2657]: I0819 00:14:17.812290 2657 topology_manager.go:138] "Creating topology manager with none policy"
Aug 19 00:14:17.812393 kubelet[2657]: I0819 00:14:17.812302 2657 container_manager_linux.go:304] "Creating device plugin manager"
Aug 19 00:14:17.812393 kubelet[2657]: I0819 00:14:17.812351 2657 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 00:14:17.812525 kubelet[2657]: I0819 00:14:17.812511 2657 kubelet.go:446] "Attempting to sync node with API server"
Aug 19 00:14:17.812584 kubelet[2657]: I0819 00:14:17.812529 2657 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 19 00:14:17.812584 kubelet[2657]: I0819 00:14:17.812555 2657 kubelet.go:352] "Adding apiserver pod source"
Aug 19 00:14:17.812584 kubelet[2657]: I0819 00:14:17.812566 2657 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 19 00:14:17.814006 kubelet[2657]: I0819 00:14:17.813972 2657 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Aug 19 00:14:17.817683 kubelet[2657]: I0819 00:14:17.817605 2657 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 19 00:14:17.820147 kubelet[2657]: I0819 00:14:17.818283 2657 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Aug 19 00:14:17.820147 kubelet[2657]: I0819 00:14:17.818334 2657 server.go:1287] "Started kubelet"
Aug 19 00:14:17.820147 kubelet[2657]: I0819 00:14:17.818625 2657 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Aug 19 00:14:17.820147 kubelet[2657]: I0819 00:14:17.819581 2657 server.go:479] "Adding debug handlers to kubelet server"
Aug 19 00:14:17.820147 kubelet[2657]: I0819 00:14:17.819858 2657 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 19 00:14:17.820413 kubelet[2657]: I0819 00:14:17.820348 2657 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 19 00:14:17.822166 kubelet[2657]: I0819 00:14:17.822127 2657 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 19 00:14:17.826172 kubelet[2657]: I0819 00:14:17.826147 2657 volume_manager.go:297] "Starting Kubelet Volume Manager"
Aug 19 00:14:17.827249 kubelet[2657]: I0819 00:14:17.827221 2657 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 19 00:14:17.832354 kubelet[2657]: I0819 00:14:17.832324 2657 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Aug 19 00:14:17.832671 kubelet[2657]: I0819 00:14:17.832654 2657 reconciler.go:26] "Reconciler: start to sync state"
Aug 19 00:14:17.841471 kubelet[2657]: E0819 00:14:17.841423 2657 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Aug 19 00:14:17.849128 kubelet[2657]: I0819 00:14:17.849064 2657 factory.go:221] Registration of the systemd container factory successfully
Aug 19 00:14:17.849284 kubelet[2657]: I0819 00:14:17.849250 2657 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 19 00:14:17.852082 kubelet[2657]: E0819 00:14:17.852032 2657 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 19 00:14:17.857468 kubelet[2657]: I0819 00:14:17.857420 2657 factory.go:221] Registration of the containerd container factory successfully
Aug 19 00:14:17.861302 kubelet[2657]: I0819 00:14:17.861208 2657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 19 00:14:17.864152 kubelet[2657]: I0819 00:14:17.864091 2657 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 19 00:14:17.864152 kubelet[2657]: I0819 00:14:17.864141 2657 status_manager.go:227] "Starting to sync pod status with apiserver"
Aug 19 00:14:17.864251 kubelet[2657]: I0819 00:14:17.864165 2657 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Aug 19 00:14:17.864251 kubelet[2657]: I0819 00:14:17.864174 2657 kubelet.go:2382] "Starting kubelet main sync loop"
Aug 19 00:14:17.864251 kubelet[2657]: E0819 00:14:17.864218 2657 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 19 00:14:17.912966 kubelet[2657]: I0819 00:14:17.912912 2657 cpu_manager.go:221] "Starting CPU manager" policy="none"
Aug 19 00:14:17.912966 kubelet[2657]: I0819 00:14:17.912949 2657 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Aug 19 00:14:17.912966 kubelet[2657]: I0819 00:14:17.912977 2657 state_mem.go:36] "Initialized new in-memory state store"
Aug 19 00:14:17.913219 kubelet[2657]: I0819 00:14:17.913201 2657 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 19 00:14:17.913260 kubelet[2657]: I0819 00:14:17.913217 2657 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 19 00:14:17.913260 kubelet[2657]: I0819 00:14:17.913238 2657 policy_none.go:49] "None policy: Start"
Aug 19 00:14:17.913260 kubelet[2657]: I0819 00:14:17.913250 2657 memory_manager.go:186] "Starting memorymanager" policy="None"
Aug 19 00:14:17.913333 kubelet[2657]: I0819 00:14:17.913259 2657 state_mem.go:35] "Initializing new in-memory state store"
Aug 19 00:14:17.913377 kubelet[2657]: I0819 00:14:17.913366 2657 state_mem.go:75] "Updated machine memory state"
Aug 19 00:14:17.918773 kubelet[2657]: I0819 00:14:17.918743 2657 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 19 00:14:17.919303 kubelet[2657]: I0819 00:14:17.919204 2657 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 19 00:14:17.919303 kubelet[2657]: I0819 00:14:17.919221 2657 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 19 00:14:17.919504 kubelet[2657]: I0819 00:14:17.919474 2657 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 19 00:14:17.921808 kubelet[2657]: E0819 00:14:17.921354 2657 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Aug 19 00:14:17.965946 kubelet[2657]: I0819 00:14:17.965597 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:14:17.965946 kubelet[2657]: I0819 00:14:17.965747 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:17.965946 kubelet[2657]: I0819 00:14:17.965597 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Aug 19 00:14:17.979125 kubelet[2657]: E0819 00:14:17.979067 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:14:18.023940 kubelet[2657]: I0819 00:14:18.023884 2657 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Aug 19 00:14:18.033516 kubelet[2657]: I0819 00:14:18.033392 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:18.033516 kubelet[2657]: I0819 00:14:18.033427 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:18.033516 kubelet[2657]: I0819 00:14:18.033454 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a9176403b596d0b29ae8ad12d635226d-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a9176403b596d0b29ae8ad12d635226d\") " pod="kube-system/kube-scheduler-localhost"
Aug 19 00:14:18.033516 kubelet[2657]: I0819 00:14:18.033470 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f733520e1d5ff7d48f98a7b6240ee048-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"f733520e1d5ff7d48f98a7b6240ee048\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:14:18.033516 kubelet[2657]: I0819 00:14:18.033492 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f733520e1d5ff7d48f98a7b6240ee048-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"f733520e1d5ff7d48f98a7b6240ee048\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:14:18.033706 kubelet[2657]: I0819 00:14:18.033508 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:18.033706 kubelet[2657]: I0819 00:14:18.033525 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:18.033706 kubelet[2657]: I0819 00:14:18.033541 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f733520e1d5ff7d48f98a7b6240ee048-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"f733520e1d5ff7d48f98a7b6240ee048\") " pod="kube-system/kube-apiserver-localhost"
Aug 19 00:14:18.033706 kubelet[2657]: I0819 00:14:18.033556 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a88c9297c136b0f15880bf567e89a977-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a88c9297c136b0f15880bf567e89a977\") " pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:18.051648 kubelet[2657]: I0819 00:14:18.051610 2657 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Aug 19 00:14:18.051774 kubelet[2657]: I0819 00:14:18.051710 2657 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Aug 19 00:14:18.216796 sudo[2694]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Aug 19 00:14:18.217573 sudo[2694]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Aug 19 00:14:18.278338 kubelet[2657]: E0819 00:14:18.278294 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:18.278521 kubelet[2657]: E0819 00:14:18.278497 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:18.279784 kubelet[2657]: E0819 00:14:18.279760 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:18.538413 sudo[2694]: pam_unix(sudo:session): session closed for user root
Aug 19 00:14:18.813286 kubelet[2657]: I0819 00:14:18.813134 2657 apiserver.go:52] "Watching apiserver"
Aug 19 00:14:18.833303 kubelet[2657]: I0819 00:14:18.833243 2657 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Aug 19 00:14:18.885920 kubelet[2657]: I0819 00:14:18.885675 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:18.886460 kubelet[2657]: I0819 00:14:18.886398 2657 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:14:18.886902 kubelet[2657]: E0819 00:14:18.886881 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:18.951441 kubelet[2657]: E0819 00:14:18.950883 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Aug 19 00:14:18.954025 kubelet[2657]: E0819 00:14:18.952376 2657 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Aug 19 00:14:18.954025 kubelet[2657]: E0819 00:14:18.952555 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:18.954530 kubelet[2657]: E0819 00:14:18.954503 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:18.975407 kubelet[2657]: I0819 00:14:18.975345 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9753274159999998 podStartE2EDuration="1.975327416s" podCreationTimestamp="2025-08-19 00:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:14:18.974862219 +0000 UTC m=+1.245675801" watchObservedRunningTime="2025-08-19 00:14:18.975327416 +0000 UTC m=+1.246141079"
Aug 19 00:14:18.987039 kubelet[2657]: I0819 00:14:18.986954 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.986912733 podStartE2EDuration="3.986912733s" podCreationTimestamp="2025-08-19 00:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:14:18.986681676 +0000 UTC m=+1.257495258" watchObservedRunningTime="2025-08-19 00:14:18.986912733 +0000 UTC m=+1.257726275"
Aug 19 00:14:18.998163 kubelet[2657]: I0819 00:14:18.997454 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.997437311 podStartE2EDuration="1.997437311s" podCreationTimestamp="2025-08-19 00:14:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:14:18.997229199 +0000 UTC m=+1.268042781" watchObservedRunningTime="2025-08-19 00:14:18.997437311 +0000 UTC m=+1.268250893"
Aug 19 00:14:19.887604 kubelet[2657]: E0819 00:14:19.887394 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:19.887604 kubelet[2657]: E0819 00:14:19.887529 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:19.888396 kubelet[2657]: E0819 00:14:19.888343 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:20.888968 kubelet[2657]: E0819 00:14:20.888935 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:21.302448 sudo[1735]: pam_unix(sudo:session): session closed for user root
Aug 19 00:14:21.304260 sshd[1734]: Connection closed by 10.0.0.1 port 36980
Aug 19 00:14:21.305003 sshd-session[1731]: pam_unix(sshd:session): session closed for user core
Aug 19 00:14:21.308970 systemd[1]: sshd@6-10.0.0.59:22-10.0.0.1:36980.service: Deactivated successfully.
Aug 19 00:14:21.311102 systemd[1]: session-7.scope: Deactivated successfully.
Aug 19 00:14:21.311439 systemd[1]: session-7.scope: Consumed 7.908s CPU time, 258.1M memory peak.
Aug 19 00:14:21.313088 systemd-logind[1510]: Session 7 logged out. Waiting for processes to exit.
Aug 19 00:14:21.315742 systemd-logind[1510]: Removed session 7.
Aug 19 00:14:22.388162 kubelet[2657]: I0819 00:14:22.388089 2657 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 19 00:14:22.388960 containerd[1525]: time="2025-08-19T00:14:22.388832300Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 19 00:14:22.389926 kubelet[2657]: I0819 00:14:22.389201 2657 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 19 00:14:23.293897 systemd[1]: Created slice kubepods-besteffort-pod4434b755_63da_4e89_94f7_6aadde009e76.slice - libcontainer container kubepods-besteffort-pod4434b755_63da_4e89_94f7_6aadde009e76.slice.
Aug 19 00:14:23.321055 systemd[1]: Created slice kubepods-burstable-podbb4c2141_3b18_4b51_ba8d_63a1c2326c70.slice - libcontainer container kubepods-burstable-podbb4c2141_3b18_4b51_ba8d_63a1c2326c70.slice.
Aug 19 00:14:23.364299 kubelet[2657]: I0819 00:14:23.364252 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-bpf-maps\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364299 kubelet[2657]: I0819 00:14:23.364301 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-cgroup\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364502 kubelet[2657]: I0819 00:14:23.364326 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4434b755-63da-4e89-94f7-6aadde009e76-kube-proxy\") pod \"kube-proxy-hq9qq\" (UID: \"4434b755-63da-4e89-94f7-6aadde009e76\") " pod="kube-system/kube-proxy-hq9qq"
Aug 19 00:14:23.364502 kubelet[2657]: I0819 00:14:23.364346 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9rng\" (UniqueName: \"kubernetes.io/projected/4434b755-63da-4e89-94f7-6aadde009e76-kube-api-access-q9rng\") pod \"kube-proxy-hq9qq\" (UID: \"4434b755-63da-4e89-94f7-6aadde009e76\") " pod="kube-system/kube-proxy-hq9qq"
Aug 19 00:14:23.364502 kubelet[2657]: I0819 00:14:23.364366 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-clustermesh-secrets\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364502 kubelet[2657]: I0819 00:14:23.364392 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-net\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364502 kubelet[2657]: I0819 00:14:23.364410 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hubble-tls\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364604 kubelet[2657]: I0819 00:14:23.364427 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-config-path\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364604 kubelet[2657]: I0819 00:14:23.364441 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-kernel\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364604 kubelet[2657]: I0819 00:14:23.364461 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-lib-modules\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364604 kubelet[2657]: I0819 00:14:23.364500 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hostproc\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364604 kubelet[2657]: I0819 00:14:23.364516 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-xtables-lock\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364604 kubelet[2657]: I0819 00:14:23.364531 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlmvw\" (UniqueName: \"kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-kube-api-access-qlmvw\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364715 kubelet[2657]: I0819 00:14:23.364545 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cni-path\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364715 kubelet[2657]: I0819 00:14:23.364561 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-etc-cni-netd\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364715 kubelet[2657]: I0819 00:14:23.364576 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4434b755-63da-4e89-94f7-6aadde009e76-xtables-lock\") pod \"kube-proxy-hq9qq\" (UID: \"4434b755-63da-4e89-94f7-6aadde009e76\") " pod="kube-system/kube-proxy-hq9qq"
Aug 19 00:14:23.364715 kubelet[2657]: I0819 00:14:23.364600 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-run\") pod \"cilium-9hqd9\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " pod="kube-system/cilium-9hqd9"
Aug 19 00:14:23.364715 kubelet[2657]: I0819 00:14:23.364620 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4434b755-63da-4e89-94f7-6aadde009e76-lib-modules\") pod \"kube-proxy-hq9qq\" (UID: \"4434b755-63da-4e89-94f7-6aadde009e76\") " pod="kube-system/kube-proxy-hq9qq"
Aug 19 00:14:23.513021 systemd[1]: Created slice kubepods-besteffort-pod0132b988_8387_4c2f_b504_7a99353c7054.slice - libcontainer container kubepods-besteffort-pod0132b988_8387_4c2f_b504_7a99353c7054.slice.
Aug 19 00:14:23.566749 kubelet[2657]: I0819 00:14:23.566618 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0132b988-8387-4c2f-b504-7a99353c7054-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-w69pf\" (UID: \"0132b988-8387-4c2f-b504-7a99353c7054\") " pod="kube-system/cilium-operator-6c4d7847fc-w69pf"
Aug 19 00:14:23.566749 kubelet[2657]: I0819 00:14:23.566668 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7vjzr\" (UniqueName: \"kubernetes.io/projected/0132b988-8387-4c2f-b504-7a99353c7054-kube-api-access-7vjzr\") pod \"cilium-operator-6c4d7847fc-w69pf\" (UID: \"0132b988-8387-4c2f-b504-7a99353c7054\") " pod="kube-system/cilium-operator-6c4d7847fc-w69pf"
Aug 19 00:14:23.609952 kubelet[2657]: E0819 00:14:23.609905 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:23.610779 containerd[1525]: time="2025-08-19T00:14:23.610516225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hq9qq,Uid:4434b755-63da-4e89-94f7-6aadde009e76,Namespace:kube-system,Attempt:0,}"
Aug 19 00:14:23.628275 kubelet[2657]: E0819 00:14:23.628238 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:23.629148 containerd[1525]: time="2025-08-19T00:14:23.628866026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9hqd9,Uid:bb4c2141-3b18-4b51-ba8d-63a1c2326c70,Namespace:kube-system,Attempt:0,}"
Aug 19 00:14:23.636618 containerd[1525]: time="2025-08-19T00:14:23.636575546Z" level=info msg="connecting to shim 3164a051d7b707832588ecfce66d7e22b098254d9e1af083089fa81d91101a7b" address="unix:///run/containerd/s/548872cfa37b876959ca29c082986db49d88022804a42c2e212438d711224001" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:14:23.650791 containerd[1525]: time="2025-08-19T00:14:23.650735977Z" level=info msg="connecting to shim 3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d" address="unix:///run/containerd/s/b4bdc2f5d6b7de90f9444845d2f5be8fee44f2edd8cb043093fd22dc6f41fb16" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:14:23.663296 systemd[1]: Started cri-containerd-3164a051d7b707832588ecfce66d7e22b098254d9e1af083089fa81d91101a7b.scope - libcontainer container 3164a051d7b707832588ecfce66d7e22b098254d9e1af083089fa81d91101a7b.
Aug 19 00:14:23.673719 systemd[1]: Started cri-containerd-3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d.scope - libcontainer container 3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d.
Aug 19 00:14:23.702171 containerd[1525]: time="2025-08-19T00:14:23.702124580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9hqd9,Uid:bb4c2141-3b18-4b51-ba8d-63a1c2326c70,Namespace:kube-system,Attempt:0,} returns sandbox id \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\""
Aug 19 00:14:23.704125 kubelet[2657]: E0819 00:14:23.704089 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:23.706014 containerd[1525]: time="2025-08-19T00:14:23.705976157Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Aug 19 00:14:23.714395 containerd[1525]: time="2025-08-19T00:14:23.714267617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hq9qq,Uid:4434b755-63da-4e89-94f7-6aadde009e76,Namespace:kube-system,Attempt:0,} returns sandbox id \"3164a051d7b707832588ecfce66d7e22b098254d9e1af083089fa81d91101a7b\""
Aug 19 00:14:23.715244 kubelet[2657]: E0819 00:14:23.715221 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:23.717443 containerd[1525]: time="2025-08-19T00:14:23.717351241Z" level=info msg="CreateContainer within sandbox \"3164a051d7b707832588ecfce66d7e22b098254d9e1af083089fa81d91101a7b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 19 00:14:23.728949 containerd[1525]: time="2025-08-19T00:14:23.728312100Z" level=info msg="Container 8eb7ac632d321de7f65d1963eee48ed7d93dde4b9b8dc2c3785ef8dd7e30f227: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:14:23.735913 containerd[1525]: time="2025-08-19T00:14:23.735856386Z" level=info msg="CreateContainer within sandbox \"3164a051d7b707832588ecfce66d7e22b098254d9e1af083089fa81d91101a7b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8eb7ac632d321de7f65d1963eee48ed7d93dde4b9b8dc2c3785ef8dd7e30f227\""
Aug 19 00:14:23.736514 containerd[1525]: time="2025-08-19T00:14:23.736483248Z" level=info msg="StartContainer for \"8eb7ac632d321de7f65d1963eee48ed7d93dde4b9b8dc2c3785ef8dd7e30f227\""
Aug 19 00:14:23.740035 containerd[1525]: time="2025-08-19T00:14:23.739975051Z" level=info msg="connecting to shim 8eb7ac632d321de7f65d1963eee48ed7d93dde4b9b8dc2c3785ef8dd7e30f227" address="unix:///run/containerd/s/548872cfa37b876959ca29c082986db49d88022804a42c2e212438d711224001" protocol=ttrpc version=3
Aug 19 00:14:23.772336 systemd[1]: Started cri-containerd-8eb7ac632d321de7f65d1963eee48ed7d93dde4b9b8dc2c3785ef8dd7e30f227.scope - libcontainer container 8eb7ac632d321de7f65d1963eee48ed7d93dde4b9b8dc2c3785ef8dd7e30f227.
Aug 19 00:14:23.819832 kubelet[2657]: E0819 00:14:23.818091 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:23.819929 containerd[1525]: time="2025-08-19T00:14:23.818746724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w69pf,Uid:0132b988-8387-4c2f-b504-7a99353c7054,Namespace:kube-system,Attempt:0,}"
Aug 19 00:14:23.825454 containerd[1525]: time="2025-08-19T00:14:23.824742613Z" level=info msg="StartContainer for \"8eb7ac632d321de7f65d1963eee48ed7d93dde4b9b8dc2c3785ef8dd7e30f227\" returns successfully"
Aug 19 00:14:23.896631 kubelet[2657]: E0819 00:14:23.896577 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:23.967917 containerd[1525]: time="2025-08-19T00:14:23.966570325Z" level=info msg="connecting to shim 468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7" address="unix:///run/containerd/s/0c1754856051f7cdc3edde0438d7dd0aa8b7c1abd9fcedde5ad5d8e73cc7614e" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:14:23.991387 systemd[1]: Started cri-containerd-468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7.scope - libcontainer container 468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7.
Aug 19 00:14:24.040018 containerd[1525]: time="2025-08-19T00:14:24.039934350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-w69pf,Uid:0132b988-8387-4c2f-b504-7a99353c7054,Namespace:kube-system,Attempt:0,} returns sandbox id \"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\""
Aug 19 00:14:24.041288 kubelet[2657]: E0819 00:14:24.041264 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:26.755662 kubelet[2657]: E0819 00:14:26.755627 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:26.815584 kubelet[2657]: I0819 00:14:26.815461 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hq9qq" podStartSLOduration=3.815441207 podStartE2EDuration="3.815441207s" podCreationTimestamp="2025-08-19 00:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:14:23.95434313 +0000 UTC m=+6.225156712" watchObservedRunningTime="2025-08-19 00:14:26.815441207 +0000 UTC m=+9.086254789"
Aug 19 00:14:26.903913 kubelet[2657]: E0819 00:14:26.903753 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:27.908759 kubelet[2657]: E0819 00:14:27.908712 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:28.189570 kubelet[2657]: E0819 00:14:28.189513 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:28.642225 kubelet[2657]: E0819 00:14:28.642088 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:28.912093 kubelet[2657]: E0819 00:14:28.911970 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:29.913789 kubelet[2657]: E0819 00:14:29.913734 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:34.489219 update_engine[1512]: I20250819 00:14:34.489151 1512 update_attempter.cc:509] Updating boot flags...
Aug 19 00:14:35.641684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount313477357.mount: Deactivated successfully.
Aug 19 00:14:37.118384 containerd[1525]: time="2025-08-19T00:14:37.118318762Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:14:37.122268 containerd[1525]: time="2025-08-19T00:14:37.122202276Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Aug 19 00:14:37.123660 containerd[1525]: time="2025-08-19T00:14:37.123608617Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:14:37.125894 containerd[1525]: time="2025-08-19T00:14:37.125757806Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.419735291s"
Aug 19 00:14:37.126261 containerd[1525]: time="2025-08-19T00:14:37.125961096Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Aug 19 00:14:37.129835 containerd[1525]: time="2025-08-19T00:14:37.129793627Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Aug 19 00:14:37.136302 containerd[1525]: time="2025-08-19T00:14:37.136250518Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Aug 19 00:14:37.164024 containerd[1525]: time="2025-08-19T00:14:37.163810844Z" level=info msg="Container c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:14:37.166601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount175932093.mount: Deactivated successfully.
Aug 19 00:14:37.173831 containerd[1525]: time="2025-08-19T00:14:37.173771601Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\""
Aug 19 00:14:37.174634 containerd[1525]: time="2025-08-19T00:14:37.174582759Z" level=info msg="StartContainer for \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\""
Aug 19 00:14:37.175604 containerd[1525]: time="2025-08-19T00:14:37.175488719Z" level=info msg="connecting to shim c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c" address="unix:///run/containerd/s/b4bdc2f5d6b7de90f9444845d2f5be8fee44f2edd8cb043093fd22dc6f41fb16" protocol=ttrpc version=3
Aug 19 00:14:37.228328 systemd[1]: Started cri-containerd-c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c.scope - libcontainer container c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c.
Aug 19 00:14:37.257527 containerd[1525]: time="2025-08-19T00:14:37.257412364Z" level=info msg="StartContainer for \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" returns successfully"
Aug 19 00:14:37.318396 systemd[1]: cri-containerd-c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c.scope: Deactivated successfully.
Aug 19 00:14:37.353852 containerd[1525]: time="2025-08-19T00:14:37.353797673Z" level=info msg="received exit event container_id:\"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" id:\"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" pid:3093 exited_at:{seconds:1755562477 nanos:339423488}"
Aug 19 00:14:37.354142 containerd[1525]: time="2025-08-19T00:14:37.354084560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" id:\"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" pid:3093 exited_at:{seconds:1755562477 nanos:339423488}"
Aug 19 00:14:37.986220 kubelet[2657]: E0819 00:14:37.986185 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:37.991587 containerd[1525]: time="2025-08-19T00:14:37.991399260Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Aug 19 00:14:38.024466 containerd[1525]: time="2025-08-19T00:14:38.024401025Z" level=info msg="Container c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:14:38.029862 containerd[1525]: time="2025-08-19T00:14:38.029805979Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\""
Aug 19 00:14:38.030454 containerd[1525]: time="2025-08-19T00:14:38.030417356Z" level=info msg="StartContainer for \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\""
Aug 19 00:14:38.031974 containerd[1525]: time="2025-08-19T00:14:38.031937196Z" level=info msg="connecting to shim c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5" address="unix:///run/containerd/s/b4bdc2f5d6b7de90f9444845d2f5be8fee44f2edd8cb043093fd22dc6f41fb16" protocol=ttrpc version=3
Aug 19 00:14:38.057323 systemd[1]: Started cri-containerd-c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5.scope - libcontainer container c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5.
Aug 19 00:14:38.100506 containerd[1525]: time="2025-08-19T00:14:38.100444176Z" level=info msg="StartContainer for \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" returns successfully"
Aug 19 00:14:38.119941 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 19 00:14:38.120201 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:14:38.120497 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:14:38.122031 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 19 00:14:38.123832 systemd[1]: cri-containerd-c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5.scope: Deactivated successfully.
Aug 19 00:14:38.131488 containerd[1525]: time="2025-08-19T00:14:38.131308121Z" level=info msg="received exit event container_id:\"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" id:\"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" pid:3140 exited_at:{seconds:1755562478 nanos:130725676}"
Aug 19 00:14:38.131488 containerd[1525]: time="2025-08-19T00:14:38.131455703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" id:\"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" pid:3140 exited_at:{seconds:1755562478 nanos:130725676}"
Aug 19 00:14:38.159140 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 19 00:14:38.164957 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c-rootfs.mount: Deactivated successfully.
Aug 19 00:14:38.787348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1913469810.mount: Deactivated successfully.
Aug 19 00:14:38.990341 kubelet[2657]: E0819 00:14:38.990247 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:38.993859 containerd[1525]: time="2025-08-19T00:14:38.993800808Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Aug 19 00:14:39.130915 containerd[1525]: time="2025-08-19T00:14:39.130803842Z" level=info msg="Container dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:14:39.174178 containerd[1525]: time="2025-08-19T00:14:39.174100014Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\""
Aug 19 00:14:39.175025 containerd[1525]: time="2025-08-19T00:14:39.174994933Z" level=info msg="StartContainer for \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\""
Aug 19 00:14:39.176343 containerd[1525]: time="2025-08-19T00:14:39.176319425Z" level=info msg="connecting to shim dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06" address="unix:///run/containerd/s/b4bdc2f5d6b7de90f9444845d2f5be8fee44f2edd8cb043093fd22dc6f41fb16" protocol=ttrpc version=3
Aug 19 00:14:39.200331 systemd[1]: Started cri-containerd-dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06.scope - libcontainer container dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06.
Aug 19 00:14:39.259058 systemd[1]: cri-containerd-dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06.scope: Deactivated successfully.
Aug 19 00:14:39.260567 containerd[1525]: time="2025-08-19T00:14:39.260536897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" id:\"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" pid:3192 exited_at:{seconds:1755562479 nanos:260169389}"
Aug 19 00:14:39.267234 containerd[1525]: time="2025-08-19T00:14:39.267186365Z" level=info msg="received exit event container_id:\"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" id:\"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" pid:3192 exited_at:{seconds:1755562479 nanos:260169389}"
Aug 19 00:14:39.269428 containerd[1525]: time="2025-08-19T00:14:39.269374683Z" level=info msg="StartContainer for \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" returns successfully"
Aug 19 00:14:39.291513 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06-rootfs.mount: Deactivated successfully.
Aug 19 00:14:39.998740 kubelet[2657]: E0819 00:14:39.997711 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:40.001952 containerd[1525]: time="2025-08-19T00:14:40.001912634Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Aug 19 00:14:40.022311 containerd[1525]: time="2025-08-19T00:14:40.022265269Z" level=info msg="Container d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:14:40.033125 containerd[1525]: time="2025-08-19T00:14:40.033071087Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\""
Aug 19 00:14:40.034461 containerd[1525]: time="2025-08-19T00:14:40.034407759Z" level=info msg="StartContainer for \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\""
Aug 19 00:14:40.037368 containerd[1525]: time="2025-08-19T00:14:40.037336361Z" level=info msg="connecting to shim d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df" address="unix:///run/containerd/s/b4bdc2f5d6b7de90f9444845d2f5be8fee44f2edd8cb043093fd22dc6f41fb16" protocol=ttrpc version=3
Aug 19 00:14:40.060362 systemd[1]: Started cri-containerd-d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df.scope - libcontainer container d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df.
Aug 19 00:14:40.083524 systemd[1]: cri-containerd-d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df.scope: Deactivated successfully.
Aug 19 00:14:40.084986 containerd[1525]: time="2025-08-19T00:14:40.084941114Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" id:\"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" pid:3234 exited_at:{seconds:1755562480 nanos:84422435}"
Aug 19 00:14:40.086219 containerd[1525]: time="2025-08-19T00:14:40.086187231Z" level=info msg="received exit event container_id:\"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" id:\"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" pid:3234 exited_at:{seconds:1755562480 nanos:84422435}"
Aug 19 00:14:40.097371 containerd[1525]: time="2025-08-19T00:14:40.097318614Z" level=info msg="StartContainer for \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" returns successfully"
Aug 19 00:14:40.502373 containerd[1525]: time="2025-08-19T00:14:40.502323653Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:14:40.503603 containerd[1525]: time="2025-08-19T00:14:40.503303828Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Aug 19 00:14:40.504734 containerd[1525]: time="2025-08-19T00:14:40.504398328Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 19 00:14:40.506403 containerd[1525]: time="2025-08-19T00:14:40.506363000Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.376523873s"
Aug 19 00:14:40.506403 containerd[1525]: time="2025-08-19T00:14:40.506402535Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Aug 19 00:14:40.509556 containerd[1525]: time="2025-08-19T00:14:40.509506924Z" level=info msg="CreateContainer within sandbox \"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Aug 19 00:14:40.519072 containerd[1525]: time="2025-08-19T00:14:40.519023409Z" level=info msg="Container cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:14:40.525371 containerd[1525]: time="2025-08-19T00:14:40.525284727Z" level=info msg="CreateContainer within sandbox \"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\""
Aug 19 00:14:40.526259 containerd[1525]: time="2025-08-19T00:14:40.526211242Z" level=info msg="StartContainer for \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\""
Aug 19 00:14:40.527412 containerd[1525]: time="2025-08-19T00:14:40.527380730Z" level=info msg="connecting to shim cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04" address="unix:///run/containerd/s/0c1754856051f7cdc3edde0438d7dd0aa8b7c1abd9fcedde5ad5d8e73cc7614e" protocol=ttrpc version=3
Aug 19 00:14:40.555366 systemd[1]: Started cri-containerd-cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04.scope - libcontainer container cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04.
Aug 19 00:14:40.635206 containerd[1525]: time="2025-08-19T00:14:40.635148846Z" level=info msg="StartContainer for \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" returns successfully"
Aug 19 00:14:41.005237 kubelet[2657]: E0819 00:14:41.005199 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:41.009629 containerd[1525]: time="2025-08-19T00:14:41.009321455Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Aug 19 00:14:41.015597 kubelet[2657]: E0819 00:14:41.015541 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:41.028048 containerd[1525]: time="2025-08-19T00:14:41.027992407Z" level=info msg="Container f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3: CDI devices from CRI Config.CDIDevices: []"
Aug 19 00:14:41.037576 containerd[1525]: time="2025-08-19T00:14:41.037506768Z" level=info msg="CreateContainer within sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\""
Aug 19 00:14:41.039384 containerd[1525]: time="2025-08-19T00:14:41.039345921Z" level=info msg="StartContainer for \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\""
Aug 19 00:14:41.041353 containerd[1525]: time="2025-08-19T00:14:41.041152422Z" level=info msg="connecting to shim f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3" address="unix:///run/containerd/s/b4bdc2f5d6b7de90f9444845d2f5be8fee44f2edd8cb043093fd22dc6f41fb16" protocol=ttrpc version=3
Aug 19 00:14:41.041918 kubelet[2657]: I0819 00:14:41.041813 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-w69pf" podStartSLOduration=1.576428125 podStartE2EDuration="18.041727512s" podCreationTimestamp="2025-08-19 00:14:23 +0000 UTC" firstStartedPulling="2025-08-19 00:14:24.041857277 +0000 UTC m=+6.312670859" lastFinishedPulling="2025-08-19 00:14:40.507156664 +0000 UTC m=+22.777970246" observedRunningTime="2025-08-19 00:14:41.040994324 +0000 UTC m=+23.311807906" watchObservedRunningTime="2025-08-19 00:14:41.041727512 +0000 UTC m=+23.312541174"
Aug 19 00:14:41.073356 systemd[1]: Started cri-containerd-f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3.scope - libcontainer container f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3.
Aug 19 00:14:41.135162 containerd[1525]: time="2025-08-19T00:14:41.135115444Z" level=info msg="StartContainer for \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" returns successfully"
Aug 19 00:14:41.248537 containerd[1525]: time="2025-08-19T00:14:41.248488007Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" id:\"29f221b6d199f9c73e06965398df53a490fdcbbb088e3af17d6a4512ba4ff02a\" pid:3344 exited_at:{seconds:1755562481 nanos:246696071}"
Aug 19 00:14:41.322236 kubelet[2657]: I0819 00:14:41.320242 2657 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Aug 19 00:14:41.397697 systemd[1]: Created slice kubepods-burstable-pod7e29fbe0_d0a1_4029_af3e_a3ac4f9c1f28.slice - libcontainer container kubepods-burstable-pod7e29fbe0_d0a1_4029_af3e_a3ac4f9c1f28.slice.
Aug 19 00:14:41.412950 systemd[1]: Created slice kubepods-burstable-pod7482be49_ddcd_4715_aaea_2673024ced9b.slice - libcontainer container kubepods-burstable-pod7482be49_ddcd_4715_aaea_2673024ced9b.slice.
Aug 19 00:14:41.507451 kubelet[2657]: I0819 00:14:41.507391 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7482be49-ddcd-4715-aaea-2673024ced9b-config-volume\") pod \"coredns-668d6bf9bc-65hmv\" (UID: \"7482be49-ddcd-4715-aaea-2673024ced9b\") " pod="kube-system/coredns-668d6bf9bc-65hmv"
Aug 19 00:14:41.507451 kubelet[2657]: I0819 00:14:41.507446 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpbsd\" (UniqueName: \"kubernetes.io/projected/7482be49-ddcd-4715-aaea-2673024ced9b-kube-api-access-lpbsd\") pod \"coredns-668d6bf9bc-65hmv\" (UID: \"7482be49-ddcd-4715-aaea-2673024ced9b\") " pod="kube-system/coredns-668d6bf9bc-65hmv"
Aug 19 00:14:41.507628 kubelet[2657]: I0819 00:14:41.507476 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7e29fbe0-d0a1-4029-af3e-a3ac4f9c1f28-config-volume\") pod \"coredns-668d6bf9bc-fm4cl\" (UID: \"7e29fbe0-d0a1-4029-af3e-a3ac4f9c1f28\") " pod="kube-system/coredns-668d6bf9bc-fm4cl"
Aug 19 00:14:41.507628 kubelet[2657]: I0819 00:14:41.507497 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nw52\" (UniqueName: \"kubernetes.io/projected/7e29fbe0-d0a1-4029-af3e-a3ac4f9c1f28-kube-api-access-2nw52\") pod \"coredns-668d6bf9bc-fm4cl\" (UID: \"7e29fbe0-d0a1-4029-af3e-a3ac4f9c1f28\") " pod="kube-system/coredns-668d6bf9bc-fm4cl"
Aug 19 00:14:41.703325 kubelet[2657]: E0819 00:14:41.703273 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:41.704932 containerd[1525]: time="2025-08-19T00:14:41.704881804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fm4cl,Uid:7e29fbe0-d0a1-4029-af3e-a3ac4f9c1f28,Namespace:kube-system,Attempt:0,}"
Aug 19 00:14:41.716187 kubelet[2657]: E0819 00:14:41.715570 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:41.716316 containerd[1525]: time="2025-08-19T00:14:41.716130040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-65hmv,Uid:7482be49-ddcd-4715-aaea-2673024ced9b,Namespace:kube-system,Attempt:0,}"
Aug 19 00:14:42.023281 kubelet[2657]: E0819 00:14:42.022827 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:42.023281 kubelet[2657]: E0819 00:14:42.022921 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:43.025047 kubelet[2657]: E0819 00:14:43.024983 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:44.029000 kubelet[2657]: E0819 00:14:44.028944 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:44.411708 systemd-networkd[1427]: cilium_host: Link UP
Aug 19 00:14:44.412132 systemd-networkd[1427]: cilium_net: Link UP
Aug 19 00:14:44.412311 systemd-networkd[1427]: cilium_host: Gained carrier
Aug 19 00:14:44.412445 systemd-networkd[1427]: cilium_net: Gained carrier
Aug 19 00:14:44.513090 systemd-networkd[1427]: cilium_vxlan: Link UP
Aug 19 00:14:44.513103 systemd-networkd[1427]: cilium_vxlan: Gained carrier
Aug 19 00:14:44.900156 kernel: NET: Registered PF_ALG protocol family
Aug 19 00:14:45.134325 systemd-networkd[1427]: cilium_host: Gained IPv6LL
Aug 19 00:14:45.390315 systemd-networkd[1427]: cilium_net: Gained IPv6LL
Aug 19 00:14:45.586896 systemd-networkd[1427]: lxc_health: Link UP
Aug 19 00:14:45.587310 systemd-networkd[1427]: lxc_health: Gained carrier
Aug 19 00:14:45.635086 kubelet[2657]: E0819 00:14:45.635027 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:45.671212 kubelet[2657]: I0819 00:14:45.670789 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9hqd9" podStartSLOduration=9.246546357 podStartE2EDuration="22.670768379s" podCreationTimestamp="2025-08-19 00:14:23 +0000 UTC" firstStartedPulling="2025-08-19 00:14:23.705374959 +0000 UTC m=+5.976188541" lastFinishedPulling="2025-08-19 00:14:37.129596981 +0000 UTC m=+19.400410563" observedRunningTime="2025-08-19 00:14:42.043554085 +0000 UTC m=+24.314367667" watchObservedRunningTime="2025-08-19 00:14:45.670768379 +0000 UTC m=+27.941581961"
Aug 19 00:14:45.904608 systemd-networkd[1427]: cilium_vxlan: Gained IPv6LL
Aug 19 00:14:45.917675 systemd-networkd[1427]: lxc6ad723f800c2: Link UP
Aug 19 00:14:45.920152 kernel: eth0: renamed from tmpf8287
Aug 19 00:14:45.922727 systemd-networkd[1427]: lxc0b30b33dd9fa: Link UP
Aug 19 00:14:45.934201 kernel: eth0: renamed from tmp78861
Aug 19 00:14:45.938061 systemd-networkd[1427]: lxc0b30b33dd9fa: Gained carrier
Aug 19 00:14:45.941844 systemd-networkd[1427]: lxc6ad723f800c2: Gained carrier
Aug 19 00:14:46.033120 kubelet[2657]: E0819 00:14:46.033060 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:47.034798 kubelet[2657]: E0819 00:14:47.034750 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Aug 19 00:14:47.118320 systemd-networkd[1427]: lxc_health: Gained IPv6LL
Aug 19 00:14:47.566254 systemd-networkd[1427]: lxc0b30b33dd9fa: Gained IPv6LL
Aug 19 00:14:47.615022 systemd[1]: Started sshd@7-10.0.0.59:22-10.0.0.1:41970.service - OpenSSH per-connection server daemon (10.0.0.1:41970).
Aug 19 00:14:47.681788 sshd[3826]: Accepted publickey for core from 10.0.0.1 port 41970 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA
Aug 19 00:14:47.683779 sshd-session[3826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 19 00:14:47.688733 systemd-logind[1510]: New session 8 of user core.
Aug 19 00:14:47.698307 systemd[1]: Started session-8.scope - Session 8 of User core.
Aug 19 00:14:47.822258 systemd-networkd[1427]: lxc6ad723f800c2: Gained IPv6LL
Aug 19 00:14:47.874664 sshd[3829]: Connection closed by 10.0.0.1 port 41970
Aug 19 00:14:47.874138 sshd-session[3826]: pam_unix(sshd:session): session closed for user core
Aug 19 00:14:47.882790 systemd[1]: sshd@7-10.0.0.59:22-10.0.0.1:41970.service: Deactivated successfully.
Aug 19 00:14:47.885085 systemd[1]: session-8.scope: Deactivated successfully.
Aug 19 00:14:47.886879 systemd-logind[1510]: Session 8 logged out. Waiting for processes to exit.
Aug 19 00:14:47.888088 systemd-logind[1510]: Removed session 8.
Aug 19 00:14:49.973715 containerd[1525]: time="2025-08-19T00:14:49.973592693Z" level=info msg="connecting to shim f82874ff212b9040bdaaa5a7784b0ed1fa635ca59000aa11b1109353f0b0bb78" address="unix:///run/containerd/s/8b040ad167c14b6aed3aeac29ebf2981196d30ceca532f431ba2706a350d2561" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:14:49.996875 containerd[1525]: time="2025-08-19T00:14:49.996827551Z" level=info msg="connecting to shim 78861fe4fa3150c7f18d57131f4f1b2391e1dce0226425a1b83940b09426ffa1" address="unix:///run/containerd/s/abbef1502e654f1d5eae860f411897c9b6d253df02ebc58f72fda5b251fcbbfa" namespace=k8s.io protocol=ttrpc version=3
Aug 19 00:14:50.015336 systemd[1]: Started cri-containerd-f82874ff212b9040bdaaa5a7784b0ed1fa635ca59000aa11b1109353f0b0bb78.scope - libcontainer container f82874ff212b9040bdaaa5a7784b0ed1fa635ca59000aa11b1109353f0b0bb78.
Aug 19 00:14:50.018705 systemd[1]: Started cri-containerd-78861fe4fa3150c7f18d57131f4f1b2391e1dce0226425a1b83940b09426ffa1.scope - libcontainer container 78861fe4fa3150c7f18d57131f4f1b2391e1dce0226425a1b83940b09426ffa1.
Aug 19 00:14:50.031554 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:14:50.032977 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Aug 19 00:14:50.063418 containerd[1525]: time="2025-08-19T00:14:50.063378982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-65hmv,Uid:7482be49-ddcd-4715-aaea-2673024ced9b,Namespace:kube-system,Attempt:0,} returns sandbox id \"78861fe4fa3150c7f18d57131f4f1b2391e1dce0226425a1b83940b09426ffa1\"" Aug 19 00:14:50.064617 kubelet[2657]: E0819 00:14:50.064590 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:50.070753 containerd[1525]: time="2025-08-19T00:14:50.070201707Z" level=info msg="CreateContainer within sandbox \"78861fe4fa3150c7f18d57131f4f1b2391e1dce0226425a1b83940b09426ffa1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:14:50.074920 containerd[1525]: time="2025-08-19T00:14:50.074876890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-fm4cl,Uid:7e29fbe0-d0a1-4029-af3e-a3ac4f9c1f28,Namespace:kube-system,Attempt:0,} returns sandbox id \"f82874ff212b9040bdaaa5a7784b0ed1fa635ca59000aa11b1109353f0b0bb78\"" Aug 19 00:14:50.076358 kubelet[2657]: E0819 00:14:50.076332 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:50.080894 containerd[1525]: time="2025-08-19T00:14:50.080861403Z" level=info msg="CreateContainer within sandbox \"f82874ff212b9040bdaaa5a7784b0ed1fa635ca59000aa11b1109353f0b0bb78\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 19 00:14:50.086778 containerd[1525]: 
time="2025-08-19T00:14:50.086596293Z" level=info msg="Container 610cba40d76db11e334afe11d6e1c152fb2f0f067e81ca396614290a539fdd42: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:50.092945 containerd[1525]: time="2025-08-19T00:14:50.092898567Z" level=info msg="Container cf471588913d7975731afbba998763d7373413d6db19100d736db494ec3ac53e: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:14:50.097560 containerd[1525]: time="2025-08-19T00:14:50.097509373Z" level=info msg="CreateContainer within sandbox \"78861fe4fa3150c7f18d57131f4f1b2391e1dce0226425a1b83940b09426ffa1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"610cba40d76db11e334afe11d6e1c152fb2f0f067e81ca396614290a539fdd42\"" Aug 19 00:14:50.098177 containerd[1525]: time="2025-08-19T00:14:50.098150415Z" level=info msg="StartContainer for \"610cba40d76db11e334afe11d6e1c152fb2f0f067e81ca396614290a539fdd42\"" Aug 19 00:14:50.100447 containerd[1525]: time="2025-08-19T00:14:50.100404945Z" level=info msg="CreateContainer within sandbox \"f82874ff212b9040bdaaa5a7784b0ed1fa635ca59000aa11b1109353f0b0bb78\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cf471588913d7975731afbba998763d7373413d6db19100d736db494ec3ac53e\"" Aug 19 00:14:50.100750 containerd[1525]: time="2025-08-19T00:14:50.100729468Z" level=info msg="connecting to shim 610cba40d76db11e334afe11d6e1c152fb2f0f067e81ca396614290a539fdd42" address="unix:///run/containerd/s/abbef1502e654f1d5eae860f411897c9b6d253df02ebc58f72fda5b251fcbbfa" protocol=ttrpc version=3 Aug 19 00:14:50.101535 containerd[1525]: time="2025-08-19T00:14:50.101510185Z" level=info msg="StartContainer for \"cf471588913d7975731afbba998763d7373413d6db19100d736db494ec3ac53e\"" Aug 19 00:14:50.102449 containerd[1525]: time="2025-08-19T00:14:50.102341635Z" level=info msg="connecting to shim cf471588913d7975731afbba998763d7373413d6db19100d736db494ec3ac53e" 
address="unix:///run/containerd/s/8b040ad167c14b6aed3aeac29ebf2981196d30ceca532f431ba2706a350d2561" protocol=ttrpc version=3 Aug 19 00:14:50.123355 systemd[1]: Started cri-containerd-cf471588913d7975731afbba998763d7373413d6db19100d736db494ec3ac53e.scope - libcontainer container cf471588913d7975731afbba998763d7373413d6db19100d736db494ec3ac53e. Aug 19 00:14:50.126585 systemd[1]: Started cri-containerd-610cba40d76db11e334afe11d6e1c152fb2f0f067e81ca396614290a539fdd42.scope - libcontainer container 610cba40d76db11e334afe11d6e1c152fb2f0f067e81ca396614290a539fdd42. Aug 19 00:14:50.161045 containerd[1525]: time="2025-08-19T00:14:50.161005551Z" level=info msg="StartContainer for \"610cba40d76db11e334afe11d6e1c152fb2f0f067e81ca396614290a539fdd42\" returns successfully" Aug 19 00:14:50.205868 containerd[1525]: time="2025-08-19T00:14:50.205769951Z" level=info msg="StartContainer for \"cf471588913d7975731afbba998763d7373413d6db19100d736db494ec3ac53e\" returns successfully" Aug 19 00:14:50.956039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount956132648.mount: Deactivated successfully. 
Aug 19 00:14:51.049790 kubelet[2657]: E0819 00:14:51.049668 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:51.053924 kubelet[2657]: E0819 00:14:51.053883 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:51.063497 kubelet[2657]: I0819 00:14:51.063420 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-fm4cl" podStartSLOduration=28.063403037 podStartE2EDuration="28.063403037s" podCreationTimestamp="2025-08-19 00:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:14:51.062696705 +0000 UTC m=+33.333510287" watchObservedRunningTime="2025-08-19 00:14:51.063403037 +0000 UTC m=+33.334216619" Aug 19 00:14:52.055739 kubelet[2657]: E0819 00:14:52.055621 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:52.055739 kubelet[2657]: E0819 00:14:52.055665 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:52.895903 systemd[1]: Started sshd@8-10.0.0.59:22-10.0.0.1:58892.service - OpenSSH per-connection server daemon (10.0.0.1:58892). Aug 19 00:14:52.955898 sshd[4024]: Accepted publickey for core from 10.0.0.1 port 58892 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:14:52.959769 sshd-session[4024]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:52.966406 systemd-logind[1510]: New session 9 of user core. 
Aug 19 00:14:52.973342 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 19 00:14:53.062330 kubelet[2657]: E0819 00:14:53.062247 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:53.062330 kubelet[2657]: E0819 00:14:53.062288 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:14:53.119926 sshd[4027]: Connection closed by 10.0.0.1 port 58892 Aug 19 00:14:53.120534 sshd-session[4024]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:53.124819 systemd-logind[1510]: Session 9 logged out. Waiting for processes to exit. Aug 19 00:14:53.125396 systemd[1]: sshd@8-10.0.0.59:22-10.0.0.1:58892.service: Deactivated successfully. Aug 19 00:14:53.128872 systemd[1]: session-9.scope: Deactivated successfully. Aug 19 00:14:53.130920 systemd-logind[1510]: Removed session 9. Aug 19 00:14:58.133636 systemd[1]: Started sshd@9-10.0.0.59:22-10.0.0.1:58894.service - OpenSSH per-connection server daemon (10.0.0.1:58894). Aug 19 00:14:58.201136 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 58894 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:14:58.202601 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:14:58.206953 systemd-logind[1510]: New session 10 of user core. Aug 19 00:14:58.221324 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 19 00:14:58.361403 sshd[4048]: Connection closed by 10.0.0.1 port 58894 Aug 19 00:14:58.362325 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Aug 19 00:14:58.366013 systemd[1]: sshd@9-10.0.0.59:22-10.0.0.1:58894.service: Deactivated successfully. 
Aug 19 00:14:58.370039 systemd[1]: session-10.scope: Deactivated successfully. Aug 19 00:14:58.371182 systemd-logind[1510]: Session 10 logged out. Waiting for processes to exit. Aug 19 00:14:58.373489 systemd-logind[1510]: Removed session 10. Aug 19 00:15:03.388270 systemd[1]: Started sshd@10-10.0.0.59:22-10.0.0.1:55056.service - OpenSSH per-connection server daemon (10.0.0.1:55056). Aug 19 00:15:03.478986 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 55056 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:03.480268 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:03.485190 systemd-logind[1510]: New session 11 of user core. Aug 19 00:15:03.502364 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 19 00:15:03.644429 sshd[4066]: Connection closed by 10.0.0.1 port 55056 Aug 19 00:15:03.644544 sshd-session[4063]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:03.656851 systemd[1]: sshd@10-10.0.0.59:22-10.0.0.1:55056.service: Deactivated successfully. Aug 19 00:15:03.662156 systemd[1]: session-11.scope: Deactivated successfully. Aug 19 00:15:03.664887 systemd-logind[1510]: Session 11 logged out. Waiting for processes to exit. Aug 19 00:15:03.668696 systemd[1]: Started sshd@11-10.0.0.59:22-10.0.0.1:55064.service - OpenSSH per-connection server daemon (10.0.0.1:55064). Aug 19 00:15:03.669680 systemd-logind[1510]: Removed session 11. Aug 19 00:15:03.743246 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 55064 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:03.744584 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:03.749253 systemd-logind[1510]: New session 12 of user core. Aug 19 00:15:03.765331 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 19 00:15:03.940251 sshd[4084]: Connection closed by 10.0.0.1 port 55064 Aug 19 00:15:03.941291 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:03.950976 systemd[1]: sshd@11-10.0.0.59:22-10.0.0.1:55064.service: Deactivated successfully. Aug 19 00:15:03.955178 systemd[1]: session-12.scope: Deactivated successfully. Aug 19 00:15:03.956031 systemd-logind[1510]: Session 12 logged out. Waiting for processes to exit. Aug 19 00:15:03.959897 systemd[1]: Started sshd@12-10.0.0.59:22-10.0.0.1:55066.service - OpenSSH per-connection server daemon (10.0.0.1:55066). Aug 19 00:15:03.966136 systemd-logind[1510]: Removed session 12. Aug 19 00:15:04.028312 sshd[4096]: Accepted publickey for core from 10.0.0.1 port 55066 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:04.030002 sshd-session[4096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:04.034870 systemd-logind[1510]: New session 13 of user core. Aug 19 00:15:04.054338 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 19 00:15:04.172681 sshd[4099]: Connection closed by 10.0.0.1 port 55066 Aug 19 00:15:04.171639 sshd-session[4096]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:04.175201 systemd[1]: sshd@12-10.0.0.59:22-10.0.0.1:55066.service: Deactivated successfully. Aug 19 00:15:04.176989 systemd[1]: session-13.scope: Deactivated successfully. Aug 19 00:15:04.179290 systemd-logind[1510]: Session 13 logged out. Waiting for processes to exit. Aug 19 00:15:04.180391 systemd-logind[1510]: Removed session 13. Aug 19 00:15:09.188864 systemd[1]: Started sshd@13-10.0.0.59:22-10.0.0.1:55082.service - OpenSSH per-connection server daemon (10.0.0.1:55082). 
Aug 19 00:15:09.255185 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 55082 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:09.256480 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:09.262494 systemd-logind[1510]: New session 14 of user core. Aug 19 00:15:09.269365 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 19 00:15:09.408159 sshd[4118]: Connection closed by 10.0.0.1 port 55082 Aug 19 00:15:09.408282 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:09.414207 systemd[1]: sshd@13-10.0.0.59:22-10.0.0.1:55082.service: Deactivated successfully. Aug 19 00:15:09.416312 systemd[1]: session-14.scope: Deactivated successfully. Aug 19 00:15:09.417964 systemd-logind[1510]: Session 14 logged out. Waiting for processes to exit. Aug 19 00:15:09.421955 systemd-logind[1510]: Removed session 14. Aug 19 00:15:14.426141 systemd[1]: Started sshd@14-10.0.0.59:22-10.0.0.1:54224.service - OpenSSH per-connection server daemon (10.0.0.1:54224). Aug 19 00:15:14.496203 sshd[4131]: Accepted publickey for core from 10.0.0.1 port 54224 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:14.497938 sshd-session[4131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:14.503165 systemd-logind[1510]: New session 15 of user core. Aug 19 00:15:14.515396 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 19 00:15:14.657137 sshd[4134]: Connection closed by 10.0.0.1 port 54224 Aug 19 00:15:14.658623 sshd-session[4131]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:14.672786 systemd[1]: sshd@14-10.0.0.59:22-10.0.0.1:54224.service: Deactivated successfully. Aug 19 00:15:14.675719 systemd[1]: session-15.scope: Deactivated successfully. Aug 19 00:15:14.683376 systemd-logind[1510]: Session 15 logged out. Waiting for processes to exit. 
Aug 19 00:15:14.689042 systemd[1]: Started sshd@15-10.0.0.59:22-10.0.0.1:54230.service - OpenSSH per-connection server daemon (10.0.0.1:54230). Aug 19 00:15:14.690185 systemd-logind[1510]: Removed session 15. Aug 19 00:15:14.761753 sshd[4147]: Accepted publickey for core from 10.0.0.1 port 54230 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:14.763413 sshd-session[4147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:14.768251 systemd-logind[1510]: New session 16 of user core. Aug 19 00:15:14.782372 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 19 00:15:15.087151 sshd[4150]: Connection closed by 10.0.0.1 port 54230 Aug 19 00:15:15.087733 sshd-session[4147]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:15.098443 systemd[1]: sshd@15-10.0.0.59:22-10.0.0.1:54230.service: Deactivated successfully. Aug 19 00:15:15.101958 systemd[1]: session-16.scope: Deactivated successfully. Aug 19 00:15:15.103313 systemd-logind[1510]: Session 16 logged out. Waiting for processes to exit. Aug 19 00:15:15.110418 systemd[1]: Started sshd@16-10.0.0.59:22-10.0.0.1:54244.service - OpenSSH per-connection server daemon (10.0.0.1:54244). Aug 19 00:15:15.111026 systemd-logind[1510]: Removed session 16. Aug 19 00:15:15.189655 sshd[4163]: Accepted publickey for core from 10.0.0.1 port 54244 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:15.191318 sshd-session[4163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:15.197154 systemd-logind[1510]: New session 17 of user core. Aug 19 00:15:15.209327 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 19 00:15:15.871994 sshd[4166]: Connection closed by 10.0.0.1 port 54244 Aug 19 00:15:15.873305 sshd-session[4163]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:15.886377 systemd[1]: sshd@16-10.0.0.59:22-10.0.0.1:54244.service: Deactivated successfully. Aug 19 00:15:15.890096 systemd[1]: session-17.scope: Deactivated successfully. Aug 19 00:15:15.891532 systemd-logind[1510]: Session 17 logged out. Waiting for processes to exit. Aug 19 00:15:15.899800 systemd[1]: Started sshd@17-10.0.0.59:22-10.0.0.1:54258.service - OpenSSH per-connection server daemon (10.0.0.1:54258). Aug 19 00:15:15.900771 systemd-logind[1510]: Removed session 17. Aug 19 00:15:15.963807 sshd[4184]: Accepted publickey for core from 10.0.0.1 port 54258 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:15.965173 sshd-session[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:15.970197 systemd-logind[1510]: New session 18 of user core. Aug 19 00:15:15.982384 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 19 00:15:16.229241 sshd[4187]: Connection closed by 10.0.0.1 port 54258 Aug 19 00:15:16.230368 sshd-session[4184]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:16.240414 systemd[1]: sshd@17-10.0.0.59:22-10.0.0.1:54258.service: Deactivated successfully. Aug 19 00:15:16.242851 systemd[1]: session-18.scope: Deactivated successfully. Aug 19 00:15:16.243820 systemd-logind[1510]: Session 18 logged out. Waiting for processes to exit. Aug 19 00:15:16.251496 systemd[1]: Started sshd@18-10.0.0.59:22-10.0.0.1:54268.service - OpenSSH per-connection server daemon (10.0.0.1:54268). Aug 19 00:15:16.254336 systemd-logind[1510]: Removed session 18. 
Aug 19 00:15:16.318945 sshd[4198]: Accepted publickey for core from 10.0.0.1 port 54268 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:16.320816 sshd-session[4198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:16.325572 systemd-logind[1510]: New session 19 of user core. Aug 19 00:15:16.334359 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 19 00:15:16.467075 sshd[4201]: Connection closed by 10.0.0.1 port 54268 Aug 19 00:15:16.467438 sshd-session[4198]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:16.471378 systemd[1]: sshd@18-10.0.0.59:22-10.0.0.1:54268.service: Deactivated successfully. Aug 19 00:15:16.475881 systemd[1]: session-19.scope: Deactivated successfully. Aug 19 00:15:16.479196 systemd-logind[1510]: Session 19 logged out. Waiting for processes to exit. Aug 19 00:15:16.483235 systemd-logind[1510]: Removed session 19. Aug 19 00:15:21.485233 systemd[1]: Started sshd@19-10.0.0.59:22-10.0.0.1:54272.service - OpenSSH per-connection server daemon (10.0.0.1:54272). Aug 19 00:15:21.546870 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 54272 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:21.548254 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:21.553572 systemd-logind[1510]: New session 20 of user core. Aug 19 00:15:21.561303 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 19 00:15:21.684627 sshd[4224]: Connection closed by 10.0.0.1 port 54272 Aug 19 00:15:21.685192 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:21.688309 systemd[1]: sshd@19-10.0.0.59:22-10.0.0.1:54272.service: Deactivated successfully. Aug 19 00:15:21.689891 systemd[1]: session-20.scope: Deactivated successfully. Aug 19 00:15:21.692395 systemd-logind[1510]: Session 20 logged out. Waiting for processes to exit. 
Aug 19 00:15:21.693520 systemd-logind[1510]: Removed session 20. Aug 19 00:15:26.696805 systemd[1]: Started sshd@20-10.0.0.59:22-10.0.0.1:39370.service - OpenSSH per-connection server daemon (10.0.0.1:39370). Aug 19 00:15:26.779398 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 39370 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:26.780772 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:26.786534 systemd-logind[1510]: New session 21 of user core. Aug 19 00:15:26.806393 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 19 00:15:26.935614 sshd[4242]: Connection closed by 10.0.0.1 port 39370 Aug 19 00:15:26.935993 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:26.939769 systemd-logind[1510]: Session 21 logged out. Waiting for processes to exit. Aug 19 00:15:26.939927 systemd[1]: sshd@20-10.0.0.59:22-10.0.0.1:39370.service: Deactivated successfully. Aug 19 00:15:26.941597 systemd[1]: session-21.scope: Deactivated successfully. Aug 19 00:15:26.942992 systemd-logind[1510]: Removed session 21. Aug 19 00:15:31.947407 systemd[1]: Started sshd@21-10.0.0.59:22-10.0.0.1:39372.service - OpenSSH per-connection server daemon (10.0.0.1:39372). Aug 19 00:15:32.013253 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 39372 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:32.014872 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:32.019434 systemd-logind[1510]: New session 22 of user core. Aug 19 00:15:32.033399 systemd[1]: Started session-22.scope - Session 22 of User core. 
Aug 19 00:15:32.152550 sshd[4258]: Connection closed by 10.0.0.1 port 39372 Aug 19 00:15:32.153088 sshd-session[4255]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:32.164847 systemd[1]: sshd@21-10.0.0.59:22-10.0.0.1:39372.service: Deactivated successfully. Aug 19 00:15:32.167732 systemd[1]: session-22.scope: Deactivated successfully. Aug 19 00:15:32.169862 systemd-logind[1510]: Session 22 logged out. Waiting for processes to exit. Aug 19 00:15:32.174032 systemd[1]: Started sshd@22-10.0.0.59:22-10.0.0.1:39384.service - OpenSSH per-connection server daemon (10.0.0.1:39384). Aug 19 00:15:32.175190 systemd-logind[1510]: Removed session 22. Aug 19 00:15:32.244845 sshd[4271]: Accepted publickey for core from 10.0.0.1 port 39384 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:32.246407 sshd-session[4271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:32.251144 systemd-logind[1510]: New session 23 of user core. Aug 19 00:15:32.260335 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 19 00:15:33.588183 kubelet[2657]: I0819 00:15:33.588116 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-65hmv" podStartSLOduration=70.588059947 podStartE2EDuration="1m10.588059947s" podCreationTimestamp="2025-08-19 00:14:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:14:51.094221514 +0000 UTC m=+33.365035136" watchObservedRunningTime="2025-08-19 00:15:33.588059947 +0000 UTC m=+75.858873529" Aug 19 00:15:33.603138 containerd[1525]: time="2025-08-19T00:15:33.602436247Z" level=info msg="StopContainer for \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" with timeout 30 (s)" Aug 19 00:15:33.604496 containerd[1525]: time="2025-08-19T00:15:33.604449951Z" level=info msg="Stop container \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" with signal terminated" Aug 19 00:15:33.625715 systemd[1]: cri-containerd-cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04.scope: Deactivated successfully. 
Aug 19 00:15:33.626480 containerd[1525]: time="2025-08-19T00:15:33.625810839Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" id:\"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" pid:3280 exited_at:{seconds:1755562533 nanos:625463349}" Aug 19 00:15:33.626480 containerd[1525]: time="2025-08-19T00:15:33.625877273Z" level=info msg="received exit event container_id:\"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" id:\"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" pid:3280 exited_at:{seconds:1755562533 nanos:625463349}" Aug 19 00:15:33.639944 containerd[1525]: time="2025-08-19T00:15:33.639858808Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 19 00:15:33.640662 containerd[1525]: time="2025-08-19T00:15:33.640630940Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" id:\"3ccd7964608b4192049fe37e7aceb66de89c5ef43ed24d260a9c355bede1e521\" pid:4303 exited_at:{seconds:1755562533 nanos:640350925}" Aug 19 00:15:33.643183 containerd[1525]: time="2025-08-19T00:15:33.643155879Z" level=info msg="StopContainer for \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" with timeout 2 (s)" Aug 19 00:15:33.643481 containerd[1525]: time="2025-08-19T00:15:33.643457772Z" level=info msg="Stop container \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" with signal terminated" Aug 19 00:15:33.653827 systemd-networkd[1427]: lxc_health: Link DOWN Aug 19 00:15:33.653834 systemd-networkd[1427]: lxc_health: Lost carrier Aug 19 00:15:33.661647 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04-rootfs.mount: Deactivated successfully. Aug 19 00:15:33.671546 systemd[1]: cri-containerd-f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3.scope: Deactivated successfully. Aug 19 00:15:33.671849 systemd[1]: cri-containerd-f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3.scope: Consumed 7.327s CPU time, 124.5M memory peak, 148K read from disk, 12.9M written to disk. Aug 19 00:15:33.674183 containerd[1525]: time="2025-08-19T00:15:33.674143163Z" level=info msg="received exit event container_id:\"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" id:\"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" pid:3313 exited_at:{seconds:1755562533 nanos:673902264}" Aug 19 00:15:33.674646 containerd[1525]: time="2025-08-19T00:15:33.674623361Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" id:\"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" pid:3313 exited_at:{seconds:1755562533 nanos:673902264}" Aug 19 00:15:33.675935 containerd[1525]: time="2025-08-19T00:15:33.675912048Z" level=info msg="StopContainer for \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" returns successfully" Aug 19 00:15:33.677017 containerd[1525]: time="2025-08-19T00:15:33.676993073Z" level=info msg="StopPodSandbox for \"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\"" Aug 19 00:15:33.685936 containerd[1525]: time="2025-08-19T00:15:33.684890421Z" level=info msg="Container to stop \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 00:15:33.691690 systemd[1]: cri-containerd-468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7.scope: Deactivated successfully. 
Aug 19 00:15:33.694976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3-rootfs.mount: Deactivated successfully. Aug 19 00:15:33.698865 containerd[1525]: time="2025-08-19T00:15:33.698813521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\" id:\"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\" pid:2898 exit_status:137 exited_at:{seconds:1755562533 nanos:694435025}" Aug 19 00:15:33.703962 containerd[1525]: time="2025-08-19T00:15:33.703843280Z" level=info msg="StopContainer for \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" returns successfully" Aug 19 00:15:33.705814 containerd[1525]: time="2025-08-19T00:15:33.705779790Z" level=info msg="StopPodSandbox for \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\"" Aug 19 00:15:33.705878 containerd[1525]: time="2025-08-19T00:15:33.705852304Z" level=info msg="Container to stop \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 00:15:33.705878 containerd[1525]: time="2025-08-19T00:15:33.705865823Z" level=info msg="Container to stop \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 00:15:33.705878 containerd[1525]: time="2025-08-19T00:15:33.705876222Z" level=info msg="Container to stop \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 00:15:33.705939 containerd[1525]: time="2025-08-19T00:15:33.705884301Z" level=info msg="Container to stop \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 00:15:33.705939 containerd[1525]: 
time="2025-08-19T00:15:33.705892700Z" level=info msg="Container to stop \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 19 00:15:33.711580 systemd[1]: cri-containerd-3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d.scope: Deactivated successfully. Aug 19 00:15:33.727208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7-rootfs.mount: Deactivated successfully. Aug 19 00:15:33.730943 containerd[1525]: time="2025-08-19T00:15:33.730909428Z" level=info msg="shim disconnected" id=468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7 namespace=k8s.io Aug 19 00:15:33.731312 containerd[1525]: time="2025-08-19T00:15:33.730941225Z" level=warning msg="cleaning up after shim disconnected" id=468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7 namespace=k8s.io Aug 19 00:15:33.731312 containerd[1525]: time="2025-08-19T00:15:33.731215081Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 00:15:33.735817 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d-rootfs.mount: Deactivated successfully. 
Aug 19 00:15:33.737858 containerd[1525]: time="2025-08-19T00:15:33.737822182Z" level=info msg="shim disconnected" id=3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d namespace=k8s.io Aug 19 00:15:33.737950 containerd[1525]: time="2025-08-19T00:15:33.737850020Z" level=warning msg="cleaning up after shim disconnected" id=3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d namespace=k8s.io Aug 19 00:15:33.737950 containerd[1525]: time="2025-08-19T00:15:33.737880177Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 19 00:15:33.746056 containerd[1525]: time="2025-08-19T00:15:33.745992146Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" id:\"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" pid:2808 exit_status:137 exited_at:{seconds:1755562533 nanos:711684913}" Aug 19 00:15:33.747570 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7-shm.mount: Deactivated successfully. 
Aug 19 00:15:33.749716 containerd[1525]: time="2025-08-19T00:15:33.746061980Z" level=info msg="received exit event sandbox_id:\"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\" exit_status:137 exited_at:{seconds:1755562533 nanos:694435025}" Aug 19 00:15:33.749782 containerd[1525]: time="2025-08-19T00:15:33.749720739Z" level=info msg="received exit event sandbox_id:\"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" exit_status:137 exited_at:{seconds:1755562533 nanos:711684913}" Aug 19 00:15:33.750355 containerd[1525]: time="2025-08-19T00:15:33.750304128Z" level=info msg="TearDown network for sandbox \"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\" successfully" Aug 19 00:15:33.750355 containerd[1525]: time="2025-08-19T00:15:33.750346285Z" level=info msg="StopPodSandbox for \"468584e08f2cf8056504a141086ccbe6735da306d29a52093454a78756c563c7\" returns successfully" Aug 19 00:15:33.750723 containerd[1525]: time="2025-08-19T00:15:33.750560866Z" level=info msg="TearDown network for sandbox \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" successfully" Aug 19 00:15:33.750723 containerd[1525]: time="2025-08-19T00:15:33.750587103Z" level=info msg="StopPodSandbox for \"3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d\" returns successfully" Aug 19 00:15:33.872224 kubelet[2657]: I0819 00:15:33.872115 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qlmvw\" (UniqueName: \"kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-kube-api-access-qlmvw\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872224 kubelet[2657]: I0819 00:15:33.872155 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-bpf-maps\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" 
(UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872224 kubelet[2657]: I0819 00:15:33.872182 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hubble-tls\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872224 kubelet[2657]: I0819 00:15:33.872198 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-xtables-lock\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872224 kubelet[2657]: I0819 00:15:33.872223 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-config-path\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872598 kubelet[2657]: I0819 00:15:33.872239 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-kernel\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872598 kubelet[2657]: I0819 00:15:33.872253 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cni-path\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872598 kubelet[2657]: I0819 00:15:33.872267 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-run\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872598 kubelet[2657]: I0819 00:15:33.872285 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vjzr\" (UniqueName: \"kubernetes.io/projected/0132b988-8387-4c2f-b504-7a99353c7054-kube-api-access-7vjzr\") pod \"0132b988-8387-4c2f-b504-7a99353c7054\" (UID: \"0132b988-8387-4c2f-b504-7a99353c7054\") " Aug 19 00:15:33.872598 kubelet[2657]: I0819 00:15:33.872300 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-cgroup\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872598 kubelet[2657]: I0819 00:15:33.872313 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-net\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872779 kubelet[2657]: I0819 00:15:33.872339 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-lib-modules\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872779 kubelet[2657]: I0819 00:15:33.872379 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-etc-cni-netd\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872779 kubelet[2657]: I0819 00:15:33.872399 2657 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0132b988-8387-4c2f-b504-7a99353c7054-cilium-config-path\") pod \"0132b988-8387-4c2f-b504-7a99353c7054\" (UID: \"0132b988-8387-4c2f-b504-7a99353c7054\") " Aug 19 00:15:33.872779 kubelet[2657]: I0819 00:15:33.872416 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-clustermesh-secrets\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.872779 kubelet[2657]: I0819 00:15:33.872430 2657 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hostproc\") pod \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\" (UID: \"bb4c2141-3b18-4b51-ba8d-63a1c2326c70\") " Aug 19 00:15:33.873204 kubelet[2657]: I0819 00:15:33.873162 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.873262 kubelet[2657]: I0819 00:15:33.873177 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hostproc" (OuterVolumeSpecName: "hostproc") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.875171 kubelet[2657]: I0819 00:15:33.873382 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.875171 kubelet[2657]: I0819 00:15:33.873420 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.875171 kubelet[2657]: I0819 00:15:33.873438 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.875171 kubelet[2657]: I0819 00:15:33.873453 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.875171 kubelet[2657]: I0819 00:15:33.873468 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.875489 kubelet[2657]: I0819 00:15:33.875459 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 00:15:33.877664 kubelet[2657]: I0819 00:15:33.877627 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0132b988-8387-4c2f-b504-7a99353c7054-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0132b988-8387-4c2f-b504-7a99353c7054" (UID: "0132b988-8387-4c2f-b504-7a99353c7054"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 19 00:15:33.877782 kubelet[2657]: I0819 00:15:33.877767 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.883678 kubelet[2657]: I0819 00:15:33.883627 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:15:33.883678 kubelet[2657]: I0819 00:15:33.883624 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 19 00:15:33.883806 kubelet[2657]: I0819 00:15:33.883712 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cni-path" (OuterVolumeSpecName: "cni-path") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.883806 kubelet[2657]: I0819 00:15:33.883732 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 19 00:15:33.884207 kubelet[2657]: I0819 00:15:33.884132 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-kube-api-access-qlmvw" (OuterVolumeSpecName: "kube-api-access-qlmvw") pod "bb4c2141-3b18-4b51-ba8d-63a1c2326c70" (UID: "bb4c2141-3b18-4b51-ba8d-63a1c2326c70"). InnerVolumeSpecName "kube-api-access-qlmvw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:15:33.887703 kubelet[2657]: I0819 00:15:33.887640 2657 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0132b988-8387-4c2f-b504-7a99353c7054-kube-api-access-7vjzr" (OuterVolumeSpecName: "kube-api-access-7vjzr") pod "0132b988-8387-4c2f-b504-7a99353c7054" (UID: "0132b988-8387-4c2f-b504-7a99353c7054"). InnerVolumeSpecName "kube-api-access-7vjzr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 19 00:15:33.894840 systemd[1]: Removed slice kubepods-burstable-podbb4c2141_3b18_4b51_ba8d_63a1c2326c70.slice - libcontainer container kubepods-burstable-podbb4c2141_3b18_4b51_ba8d_63a1c2326c70.slice. Aug 19 00:15:33.895301 systemd[1]: kubepods-burstable-podbb4c2141_3b18_4b51_ba8d_63a1c2326c70.slice: Consumed 7.482s CPU time, 124.9M memory peak, 160K read from disk, 12.9M written to disk. 
Aug 19 00:15:33.973044 kubelet[2657]: I0819 00:15:33.973005 2657 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-xtables-lock\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973206 kubelet[2657]: I0819 00:15:33.973192 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973267 kubelet[2657]: I0819 00:15:33.973257 2657 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973444 2657 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cni-path\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973460 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-run\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973469 2657 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7vjzr\" (UniqueName: \"kubernetes.io/projected/0132b988-8387-4c2f-b504-7a99353c7054-kube-api-access-7vjzr\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973477 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973485 2657 reconciler_common.go:299] 
"Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973493 2657 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-lib-modules\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973501 2657 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973570 kubelet[2657]: I0819 00:15:33.973508 2657 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0132b988-8387-4c2f-b504-7a99353c7054-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973770 kubelet[2657]: I0819 00:15:33.973516 2657 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973770 kubelet[2657]: I0819 00:15:33.973524 2657 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hostproc\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973770 kubelet[2657]: I0819 00:15:33.973532 2657 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qlmvw\" (UniqueName: \"kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-kube-api-access-qlmvw\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973770 kubelet[2657]: I0819 00:15:33.973541 2657 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-bpf-maps\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:33.973770 kubelet[2657]: I0819 00:15:33.973548 2657 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bb4c2141-3b18-4b51-ba8d-63a1c2326c70-hubble-tls\") on node \"localhost\" DevicePath \"\"" Aug 19 00:15:34.159250 kubelet[2657]: I0819 00:15:34.158773 2657 scope.go:117] "RemoveContainer" containerID="cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04" Aug 19 00:15:34.162163 containerd[1525]: time="2025-08-19T00:15:34.160768924Z" level=info msg="RemoveContainer for \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\"" Aug 19 00:15:34.168631 systemd[1]: Removed slice kubepods-besteffort-pod0132b988_8387_4c2f_b504_7a99353c7054.slice - libcontainer container kubepods-besteffort-pod0132b988_8387_4c2f_b504_7a99353c7054.slice. Aug 19 00:15:34.170594 containerd[1525]: time="2025-08-19T00:15:34.169892093Z" level=info msg="RemoveContainer for \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" returns successfully" Aug 19 00:15:34.178120 kubelet[2657]: I0819 00:15:34.178079 2657 scope.go:117] "RemoveContainer" containerID="cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04" Aug 19 00:15:34.178878 containerd[1525]: time="2025-08-19T00:15:34.178782481Z" level=error msg="ContainerStatus for \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\": not found" Aug 19 00:15:34.179049 kubelet[2657]: E0819 00:15:34.179024 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\": not found" 
containerID="cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04" Aug 19 00:15:34.179205 kubelet[2657]: I0819 00:15:34.179064 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04"} err="failed to get container status \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\": rpc error: code = NotFound desc = an error occurred when try to find container \"cda19c5f369103bd4c2d498047d87cedc2b3530e11701f0d6a6f5a2894111a04\": not found" Aug 19 00:15:34.181187 kubelet[2657]: I0819 00:15:34.181159 2657 scope.go:117] "RemoveContainer" containerID="f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3" Aug 19 00:15:34.184714 containerd[1525]: time="2025-08-19T00:15:34.184676156Z" level=info msg="RemoveContainer for \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\"" Aug 19 00:15:34.189200 containerd[1525]: time="2025-08-19T00:15:34.189153548Z" level=info msg="RemoveContainer for \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" returns successfully" Aug 19 00:15:34.189425 kubelet[2657]: I0819 00:15:34.189388 2657 scope.go:117] "RemoveContainer" containerID="d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df" Aug 19 00:15:34.191119 containerd[1525]: time="2025-08-19T00:15:34.191067550Z" level=info msg="RemoveContainer for \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\"" Aug 19 00:15:34.196965 containerd[1525]: time="2025-08-19T00:15:34.196594255Z" level=info msg="RemoveContainer for \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" returns successfully" Aug 19 00:15:34.197067 kubelet[2657]: I0819 00:15:34.196878 2657 scope.go:117] "RemoveContainer" containerID="dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06" Aug 19 00:15:34.200614 containerd[1525]: time="2025-08-19T00:15:34.199333070Z" level=info msg="RemoveContainer 
for \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\"" Aug 19 00:15:34.215213 containerd[1525]: time="2025-08-19T00:15:34.215095252Z" level=info msg="RemoveContainer for \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" returns successfully" Aug 19 00:15:34.215509 kubelet[2657]: I0819 00:15:34.215470 2657 scope.go:117] "RemoveContainer" containerID="c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5" Aug 19 00:15:34.216917 containerd[1525]: time="2025-08-19T00:15:34.216879106Z" level=info msg="RemoveContainer for \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\"" Aug 19 00:15:34.220078 containerd[1525]: time="2025-08-19T00:15:34.220033326Z" level=info msg="RemoveContainer for \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" returns successfully" Aug 19 00:15:34.220364 kubelet[2657]: I0819 00:15:34.220331 2657 scope.go:117] "RemoveContainer" containerID="c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c" Aug 19 00:15:34.222289 containerd[1525]: time="2025-08-19T00:15:34.222237504Z" level=info msg="RemoveContainer for \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\"" Aug 19 00:15:34.224867 containerd[1525]: time="2025-08-19T00:15:34.224812093Z" level=info msg="RemoveContainer for \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" returns successfully" Aug 19 00:15:34.225077 kubelet[2657]: I0819 00:15:34.225039 2657 scope.go:117] "RemoveContainer" containerID="f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3" Aug 19 00:15:34.225392 containerd[1525]: time="2025-08-19T00:15:34.225307012Z" level=error msg="ContainerStatus for \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\": not found" Aug 19 00:15:34.225546 kubelet[2657]: E0819 
00:15:34.225513 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\": not found" containerID="f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3" Aug 19 00:15:34.225581 kubelet[2657]: I0819 00:15:34.225548 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3"} err="failed to get container status \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"f091024a628aec0ae925cfe87a7b8bda2f6bd9a8bfa8ba09d738b45676daa2a3\": not found" Aug 19 00:15:34.225581 kubelet[2657]: I0819 00:15:34.225570 2657 scope.go:117] "RemoveContainer" containerID="d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df" Aug 19 00:15:34.225769 containerd[1525]: time="2025-08-19T00:15:34.225732817Z" level=error msg="ContainerStatus for \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\": not found" Aug 19 00:15:34.225862 kubelet[2657]: E0819 00:15:34.225838 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\": not found" containerID="d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df" Aug 19 00:15:34.225896 kubelet[2657]: I0819 00:15:34.225866 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df"} err="failed to get container status 
\"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\": rpc error: code = NotFound desc = an error occurred when try to find container \"d2b8f73561c42d8c05f0b9451ba03b9b808d179d04459df2bece7152888c51df\": not found" Aug 19 00:15:34.225896 kubelet[2657]: I0819 00:15:34.225889 2657 scope.go:117] "RemoveContainer" containerID="dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06" Aug 19 00:15:34.226055 containerd[1525]: time="2025-08-19T00:15:34.226027752Z" level=error msg="ContainerStatus for \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\": not found" Aug 19 00:15:34.226186 kubelet[2657]: E0819 00:15:34.226164 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\": not found" containerID="dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06" Aug 19 00:15:34.226229 kubelet[2657]: I0819 00:15:34.226191 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06"} err="failed to get container status \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\": rpc error: code = NotFound desc = an error occurred when try to find container \"dafc7ab15d045cfd011c120e802dba2bdd6649317dfc1260993322e2dd584c06\": not found" Aug 19 00:15:34.226229 kubelet[2657]: I0819 00:15:34.226209 2657 scope.go:117] "RemoveContainer" containerID="c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5" Aug 19 00:15:34.226452 containerd[1525]: time="2025-08-19T00:15:34.226382243Z" level=error msg="ContainerStatus for \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\": not found" Aug 19 00:15:34.226672 kubelet[2657]: E0819 00:15:34.226554 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\": not found" containerID="c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5" Aug 19 00:15:34.226672 kubelet[2657]: I0819 00:15:34.226582 2657 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5"} err="failed to get container status \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c750b8de76d52b113e1e68ab959e825c4fe6b7703d559c05bc40688e536e04f5\": not found" Aug 19 00:15:34.226672 kubelet[2657]: I0819 00:15:34.226599 2657 scope.go:117] "RemoveContainer" containerID="c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c" Aug 19 00:15:34.226771 containerd[1525]: time="2025-08-19T00:15:34.226746373Z" level=error msg="ContainerStatus for \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\": not found" Aug 19 00:15:34.226872 kubelet[2657]: E0819 00:15:34.226852 2657 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\": not found" containerID="c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c" Aug 19 00:15:34.226912 kubelet[2657]: I0819 00:15:34.226878 2657 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c"} err="failed to get container status \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\": rpc error: code = NotFound desc = an error occurred when try to find container \"c0fb6804da966a109d1013affd69204ddb64f9a3798cd0b55c376421026cf35c\": not found" Aug 19 00:15:34.661628 systemd[1]: var-lib-kubelet-pods-0132b988\x2d8387\x2d4c2f\x2db504\x2d7a99353c7054-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7vjzr.mount: Deactivated successfully. Aug 19 00:15:34.661726 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3bf32976a901ad27154fa9198e3d544e19050fd83563c54c298b0b07a2a7e63d-shm.mount: Deactivated successfully. Aug 19 00:15:34.661859 systemd[1]: var-lib-kubelet-pods-bb4c2141\x2d3b18\x2d4b51\x2dba8d\x2d63a1c2326c70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqlmvw.mount: Deactivated successfully. Aug 19 00:15:34.661907 systemd[1]: var-lib-kubelet-pods-bb4c2141\x2d3b18\x2d4b51\x2dba8d\x2d63a1c2326c70-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Aug 19 00:15:34.661961 systemd[1]: var-lib-kubelet-pods-bb4c2141\x2d3b18\x2d4b51\x2dba8d\x2d63a1c2326c70-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Aug 19 00:15:35.556153 sshd[4274]: Connection closed by 10.0.0.1 port 39384 Aug 19 00:15:35.556100 sshd-session[4271]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:35.568249 systemd[1]: sshd@22-10.0.0.59:22-10.0.0.1:39384.service: Deactivated successfully. Aug 19 00:15:35.569884 systemd[1]: session-23.scope: Deactivated successfully. Aug 19 00:15:35.571850 systemd-logind[1510]: Session 23 logged out. Waiting for processes to exit. 
Aug 19 00:15:35.575318 systemd[1]: Started sshd@23-10.0.0.59:22-10.0.0.1:60436.service - OpenSSH per-connection server daemon (10.0.0.1:60436). Aug 19 00:15:35.577743 systemd-logind[1510]: Removed session 23. Aug 19 00:15:35.642294 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 60436 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:35.643554 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:35.648429 systemd-logind[1510]: New session 24 of user core. Aug 19 00:15:35.656321 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 19 00:15:35.867475 kubelet[2657]: I0819 00:15:35.867369 2657 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0132b988-8387-4c2f-b504-7a99353c7054" path="/var/lib/kubelet/pods/0132b988-8387-4c2f-b504-7a99353c7054/volumes" Aug 19 00:15:35.868020 kubelet[2657]: I0819 00:15:35.867759 2657 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb4c2141-3b18-4b51-ba8d-63a1c2326c70" path="/var/lib/kubelet/pods/bb4c2141-3b18-4b51-ba8d-63a1c2326c70/volumes" Aug 19 00:15:36.677461 sshd[4431]: Connection closed by 10.0.0.1 port 60436 Aug 19 00:15:36.677909 sshd-session[4428]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:36.688950 systemd[1]: sshd@23-10.0.0.59:22-10.0.0.1:60436.service: Deactivated successfully. Aug 19 00:15:36.692423 systemd[1]: session-24.scope: Deactivated successfully. Aug 19 00:15:36.693861 systemd-logind[1510]: Session 24 logged out. Waiting for processes to exit. Aug 19 00:15:36.699920 systemd[1]: Started sshd@24-10.0.0.59:22-10.0.0.1:60450.service - OpenSSH per-connection server daemon (10.0.0.1:60450). Aug 19 00:15:36.701196 systemd-logind[1510]: Removed session 24. 
Aug 19 00:15:36.719452 kubelet[2657]: I0819 00:15:36.719409 2657 memory_manager.go:355] "RemoveStaleState removing state" podUID="0132b988-8387-4c2f-b504-7a99353c7054" containerName="cilium-operator" Aug 19 00:15:36.720472 kubelet[2657]: I0819 00:15:36.719601 2657 memory_manager.go:355] "RemoveStaleState removing state" podUID="bb4c2141-3b18-4b51-ba8d-63a1c2326c70" containerName="cilium-agent" Aug 19 00:15:36.733042 systemd[1]: Created slice kubepods-burstable-podef060321_6616_4e59_b14b_51dc480921ce.slice - libcontainer container kubepods-burstable-podef060321_6616_4e59_b14b_51dc480921ce.slice. Aug 19 00:15:36.765677 sshd[4443]: Accepted publickey for core from 10.0.0.1 port 60450 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:36.767388 sshd-session[4443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:36.771977 systemd-logind[1510]: New session 25 of user core. Aug 19 00:15:36.779315 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 19 00:15:36.830971 sshd[4446]: Connection closed by 10.0.0.1 port 60450 Aug 19 00:15:36.829769 sshd-session[4443]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:36.841546 systemd[1]: sshd@24-10.0.0.59:22-10.0.0.1:60450.service: Deactivated successfully. Aug 19 00:15:36.844546 systemd[1]: session-25.scope: Deactivated successfully. Aug 19 00:15:36.845221 systemd-logind[1510]: Session 25 logged out. Waiting for processes to exit. Aug 19 00:15:36.848281 systemd[1]: Started sshd@25-10.0.0.59:22-10.0.0.1:60458.service - OpenSSH per-connection server daemon (10.0.0.1:60458). Aug 19 00:15:36.849178 systemd-logind[1510]: Removed session 25. 
Aug 19 00:15:36.864728 kubelet[2657]: E0819 00:15:36.864697 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:36.890657 kubelet[2657]: I0819 00:15:36.890608 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-xtables-lock\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.890657 kubelet[2657]: I0819 00:15:36.890662 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-lib-modules\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891030 kubelet[2657]: I0819 00:15:36.890694 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h2sc\" (UniqueName: \"kubernetes.io/projected/ef060321-6616-4e59-b14b-51dc480921ce-kube-api-access-2h2sc\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891030 kubelet[2657]: I0819 00:15:36.890724 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-cilium-run\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891030 kubelet[2657]: I0819 00:15:36.890771 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-host-proc-sys-net\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891030 kubelet[2657]: I0819 00:15:36.890787 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-cni-path\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891030 kubelet[2657]: I0819 00:15:36.890802 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-etc-cni-netd\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891030 kubelet[2657]: I0819 00:15:36.890828 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ef060321-6616-4e59-b14b-51dc480921ce-cilium-ipsec-secrets\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891225 kubelet[2657]: I0819 00:15:36.890857 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-cilium-cgroup\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891225 kubelet[2657]: I0819 00:15:36.890874 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-hostproc\") pod \"cilium-g7lr6\" (UID: 
\"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891225 kubelet[2657]: I0819 00:15:36.890889 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ef060321-6616-4e59-b14b-51dc480921ce-clustermesh-secrets\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891225 kubelet[2657]: I0819 00:15:36.890920 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-host-proc-sys-kernel\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891225 kubelet[2657]: I0819 00:15:36.890937 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ef060321-6616-4e59-b14b-51dc480921ce-bpf-maps\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891225 kubelet[2657]: I0819 00:15:36.890953 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ef060321-6616-4e59-b14b-51dc480921ce-cilium-config-path\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.891376 kubelet[2657]: I0819 00:15:36.890966 2657 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ef060321-6616-4e59-b14b-51dc480921ce-hubble-tls\") pod \"cilium-g7lr6\" (UID: \"ef060321-6616-4e59-b14b-51dc480921ce\") " pod="kube-system/cilium-g7lr6" Aug 19 00:15:36.913265 sshd[4453]: 
Accepted publickey for core from 10.0.0.1 port 60458 ssh2: RSA SHA256:KtdM7F0JALreH0qQbeHxcUClgTXNHNzWeYwdEyvS3QA Aug 19 00:15:36.915687 sshd-session[4453]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 19 00:15:36.921334 systemd-logind[1510]: New session 26 of user core. Aug 19 00:15:36.940319 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 19 00:15:37.037345 kubelet[2657]: E0819 00:15:37.037278 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:37.037919 containerd[1525]: time="2025-08-19T00:15:37.037886239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7lr6,Uid:ef060321-6616-4e59-b14b-51dc480921ce,Namespace:kube-system,Attempt:0,}" Aug 19 00:15:37.058782 containerd[1525]: time="2025-08-19T00:15:37.058686799Z" level=info msg="connecting to shim 1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b" address="unix:///run/containerd/s/cb4b454377d649c886b61335f05d1273a52c85e7c50b90b5f309c0fc0d172424" namespace=k8s.io protocol=ttrpc version=3 Aug 19 00:15:37.095378 systemd[1]: Started cri-containerd-1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b.scope - libcontainer container 1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b. 
Aug 19 00:15:37.118570 containerd[1525]: time="2025-08-19T00:15:37.118531331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-g7lr6,Uid:ef060321-6616-4e59-b14b-51dc480921ce,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\"" Aug 19 00:15:37.119391 kubelet[2657]: E0819 00:15:37.119358 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:37.124142 containerd[1525]: time="2025-08-19T00:15:37.124024001Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Aug 19 00:15:37.135505 containerd[1525]: time="2025-08-19T00:15:37.135394476Z" level=info msg="Container 6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:37.140872 containerd[1525]: time="2025-08-19T00:15:37.140813311Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439\"" Aug 19 00:15:37.141436 containerd[1525]: time="2025-08-19T00:15:37.141384672Z" level=info msg="StartContainer for \"6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439\"" Aug 19 00:15:37.142800 containerd[1525]: time="2025-08-19T00:15:37.142712063Z" level=info msg="connecting to shim 6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439" address="unix:///run/containerd/s/cb4b454377d649c886b61335f05d1273a52c85e7c50b90b5f309c0fc0d172424" protocol=ttrpc version=3 Aug 19 00:15:37.180345 systemd[1]: Started cri-containerd-6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439.scope - libcontainer 
container 6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439. Aug 19 00:15:37.210447 containerd[1525]: time="2025-08-19T00:15:37.210009053Z" level=info msg="StartContainer for \"6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439\" returns successfully" Aug 19 00:15:37.252000 systemd[1]: cri-containerd-6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439.scope: Deactivated successfully. Aug 19 00:15:37.255313 containerd[1525]: time="2025-08-19T00:15:37.255267846Z" level=info msg="received exit event container_id:\"6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439\" id:\"6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439\" pid:4526 exited_at:{seconds:1755562537 nanos:254918550}" Aug 19 00:15:37.255404 containerd[1525]: time="2025-08-19T00:15:37.255366360Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439\" id:\"6f14e3c73b208c2a6ca490c29700766cac0887459a5ea4bcd81005e153c8d439\" pid:4526 exited_at:{seconds:1755562537 nanos:254918550}" Aug 19 00:15:37.942629 kubelet[2657]: E0819 00:15:37.942571 2657 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Aug 19 00:15:38.197430 kubelet[2657]: E0819 00:15:38.196906 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:38.201590 containerd[1525]: time="2025-08-19T00:15:38.201561126Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Aug 19 00:15:38.214836 containerd[1525]: time="2025-08-19T00:15:38.214786538Z" level=info msg="Container 
f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:38.221351 containerd[1525]: time="2025-08-19T00:15:38.221288731Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8\"" Aug 19 00:15:38.222122 containerd[1525]: time="2025-08-19T00:15:38.222063922Z" level=info msg="StartContainer for \"f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8\"" Aug 19 00:15:38.222962 containerd[1525]: time="2025-08-19T00:15:38.222930388Z" level=info msg="connecting to shim f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8" address="unix:///run/containerd/s/cb4b454377d649c886b61335f05d1273a52c85e7c50b90b5f309c0fc0d172424" protocol=ttrpc version=3 Aug 19 00:15:38.250340 systemd[1]: Started cri-containerd-f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8.scope - libcontainer container f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8. Aug 19 00:15:38.277818 containerd[1525]: time="2025-08-19T00:15:38.277769113Z" level=info msg="StartContainer for \"f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8\" returns successfully" Aug 19 00:15:38.281911 systemd[1]: cri-containerd-f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8.scope: Deactivated successfully. 
Aug 19 00:15:38.283059 containerd[1525]: time="2025-08-19T00:15:38.282653448Z" level=info msg="received exit event container_id:\"f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8\" id:\"f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8\" pid:4572 exited_at:{seconds:1755562538 nanos:282452340}" Aug 19 00:15:38.283059 containerd[1525]: time="2025-08-19T00:15:38.282882073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8\" id:\"f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8\" pid:4572 exited_at:{seconds:1755562538 nanos:282452340}" Aug 19 00:15:38.299403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f6042fcda5d2d07966e4c73f8368576ac0dc68598b49ee6e3e83cae2b08c5be8-rootfs.mount: Deactivated successfully. Aug 19 00:15:38.865333 kubelet[2657]: E0819 00:15:38.865218 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:39.204193 kubelet[2657]: E0819 00:15:39.204141 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:39.207537 containerd[1525]: time="2025-08-19T00:15:39.207491187Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Aug 19 00:15:39.226327 containerd[1525]: time="2025-08-19T00:15:39.226282295Z" level=info msg="Container 9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:39.235602 containerd[1525]: time="2025-08-19T00:15:39.235542798Z" level=info msg="CreateContainer within sandbox 
\"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47\"" Aug 19 00:15:39.236937 containerd[1525]: time="2025-08-19T00:15:39.236887839Z" level=info msg="StartContainer for \"9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47\"" Aug 19 00:15:39.240128 containerd[1525]: time="2025-08-19T00:15:39.239899704Z" level=info msg="connecting to shim 9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47" address="unix:///run/containerd/s/cb4b454377d649c886b61335f05d1273a52c85e7c50b90b5f309c0fc0d172424" protocol=ttrpc version=3 Aug 19 00:15:39.267330 systemd[1]: Started cri-containerd-9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47.scope - libcontainer container 9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47. Aug 19 00:15:39.302775 systemd[1]: cri-containerd-9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47.scope: Deactivated successfully. 
Aug 19 00:15:39.305385 containerd[1525]: time="2025-08-19T00:15:39.305350583Z" level=info msg="StartContainer for \"9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47\" returns successfully" Aug 19 00:15:39.307265 containerd[1525]: time="2025-08-19T00:15:39.306608630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47\" id:\"9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47\" pid:4620 exited_at:{seconds:1755562539 nanos:306367204}" Aug 19 00:15:39.307265 containerd[1525]: time="2025-08-19T00:15:39.306682546Z" level=info msg="received exit event container_id:\"9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47\" id:\"9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47\" pid:4620 exited_at:{seconds:1755562539 nanos:306367204}" Aug 19 00:15:39.331376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d733dfc89ae46aa161fc8f31390e4363aaf9505afa8045f57587754173d3e47-rootfs.mount: Deactivated successfully. 
Aug 19 00:15:39.545250 kubelet[2657]: I0819 00:15:39.545124 2657 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-08-19T00:15:39Z","lastTransitionTime":"2025-08-19T00:15:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Aug 19 00:15:40.208605 kubelet[2657]: E0819 00:15:40.208573 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:40.213653 containerd[1525]: time="2025-08-19T00:15:40.213611008Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Aug 19 00:15:40.228165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573450692.mount: Deactivated successfully. 
Aug 19 00:15:40.244303 containerd[1525]: time="2025-08-19T00:15:40.244248843Z" level=info msg="Container 735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:40.255989 containerd[1525]: time="2025-08-19T00:15:40.253094249Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8\"" Aug 19 00:15:40.256700 containerd[1525]: time="2025-08-19T00:15:40.256381112Z" level=info msg="StartContainer for \"735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8\"" Aug 19 00:15:40.257674 containerd[1525]: time="2025-08-19T00:15:40.257633685Z" level=info msg="connecting to shim 735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8" address="unix:///run/containerd/s/cb4b454377d649c886b61335f05d1273a52c85e7c50b90b5f309c0fc0d172424" protocol=ttrpc version=3 Aug 19 00:15:40.286380 systemd[1]: Started cri-containerd-735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8.scope - libcontainer container 735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8. Aug 19 00:15:40.311359 systemd[1]: cri-containerd-735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8.scope: Deactivated successfully. 
Aug 19 00:15:40.312674 containerd[1525]: time="2025-08-19T00:15:40.312255033Z" level=info msg="TaskExit event in podsandbox handler container_id:\"735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8\" id:\"735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8\" pid:4658 exited_at:{seconds:1755562540 nanos:311540511}" Aug 19 00:15:40.312835 containerd[1525]: time="2025-08-19T00:15:40.312315150Z" level=info msg="received exit event container_id:\"735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8\" id:\"735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8\" pid:4658 exited_at:{seconds:1755562540 nanos:311540511}" Aug 19 00:15:40.322794 containerd[1525]: time="2025-08-19T00:15:40.322753389Z" level=info msg="StartContainer for \"735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8\" returns successfully" Aug 19 00:15:40.865514 kubelet[2657]: E0819 00:15:40.865102 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:41.222367 kubelet[2657]: E0819 00:15:41.222315 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:41.225856 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-735e9e420bfa5eaf9798d884a684b138f64ab76f01feee8a5811e1bda08d27e8-rootfs.mount: Deactivated successfully. 
Aug 19 00:15:41.230408 containerd[1525]: time="2025-08-19T00:15:41.230345008Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Aug 19 00:15:41.246800 containerd[1525]: time="2025-08-19T00:15:41.246748038Z" level=info msg="Container a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7: CDI devices from CRI Config.CDIDevices: []" Aug 19 00:15:41.257924 containerd[1525]: time="2025-08-19T00:15:41.257875528Z" level=info msg="CreateContainer within sandbox \"1d1e6c6d814f817670d29d64cd6e5ac85a4a9a49dc2935673e4217510d0d0b0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\"" Aug 19 00:15:41.258554 containerd[1525]: time="2025-08-19T00:15:41.258527895Z" level=info msg="StartContainer for \"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\"" Aug 19 00:15:41.259485 containerd[1525]: time="2025-08-19T00:15:41.259446570Z" level=info msg="connecting to shim a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7" address="unix:///run/containerd/s/cb4b454377d649c886b61335f05d1273a52c85e7c50b90b5f309c0fc0d172424" protocol=ttrpc version=3 Aug 19 00:15:41.284328 systemd[1]: Started cri-containerd-a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7.scope - libcontainer container a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7. 
Aug 19 00:15:41.317598 containerd[1525]: time="2025-08-19T00:15:41.317549619Z" level=info msg="StartContainer for \"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\" returns successfully" Aug 19 00:15:41.382901 containerd[1525]: time="2025-08-19T00:15:41.382858991Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\" id:\"b1fa4e7e580cc1fea853395740b02a3f014fc0cf92719838e557635163d5fdd6\" pid:4725 exited_at:{seconds:1755562541 nanos:382426293}" Aug 19 00:15:41.636158 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Aug 19 00:15:42.233255 kubelet[2657]: E0819 00:15:42.233209 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:43.233791 kubelet[2657]: E0819 00:15:43.233748 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:43.449126 containerd[1525]: time="2025-08-19T00:15:43.449053980Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\" id:\"3c6f1dcf8f935c18ed56f0ba05463b7a06bdd87afbf0eae33e15d135ae543c4b\" pid:4890 exit_status:1 exited_at:{seconds:1755562543 nanos:448091620}" Aug 19 00:15:44.237141 kubelet[2657]: E0819 00:15:44.237083 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:44.636072 systemd-networkd[1427]: lxc_health: Link UP Aug 19 00:15:44.640295 systemd-networkd[1427]: lxc_health: Gained carrier Aug 19 00:15:45.068064 kubelet[2657]: I0819 00:15:45.067989 2657 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/cilium-g7lr6" podStartSLOduration=9.067970452 podStartE2EDuration="9.067970452s" podCreationTimestamp="2025-08-19 00:15:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-19 00:15:42.25082681 +0000 UTC m=+84.521640432" watchObservedRunningTime="2025-08-19 00:15:45.067970452 +0000 UTC m=+87.338784074" Aug 19 00:15:45.239822 kubelet[2657]: E0819 00:15:45.239777 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:45.585787 containerd[1525]: time="2025-08-19T00:15:45.585730188Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\" id:\"46dfb771759c6aeebdb72418f6afb0fec0f3cc440ec2d5621e7d2e25383b566d\" pid:5261 exited_at:{seconds:1755562545 nanos:585321882}" Aug 19 00:15:45.998295 systemd-networkd[1427]: lxc_health: Gained IPv6LL Aug 19 00:15:46.243967 kubelet[2657]: E0819 00:15:46.243634 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:47.246369 kubelet[2657]: E0819 00:15:47.246326 2657 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Aug 19 00:15:47.777340 containerd[1525]: time="2025-08-19T00:15:47.777289996Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\" id:\"dc50e46467e1eed8ef99a7077f777c777ba26d870fd68bfeccdba54a2cf7a2eb\" pid:5295 exited_at:{seconds:1755562547 nanos:776781449}" Aug 19 00:15:49.897717 containerd[1525]: time="2025-08-19T00:15:49.897603788Z" level=info msg="TaskExit event in 
podsandbox handler container_id:\"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\" id:\"3ad3b5d7fed1e4037659a2d28842074a95dc548d285a365735dde92f804c99b9\" pid:5325 exited_at:{seconds:1755562549 nanos:897278995}" Aug 19 00:15:52.034451 containerd[1525]: time="2025-08-19T00:15:52.034335010Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a0620f6b71d0231a53b463a32c76823e87e38ecd791448ebde49d60d908f8ba7\" id:\"9711a0ac36cac6d13f41410832798f8f37c2341beb70782a553baa685af41d82\" pid:5347 exited_at:{seconds:1755562552 nanos:33844375}" Aug 19 00:15:52.036580 kubelet[2657]: E0819 00:15:52.036434 2657 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57598->127.0.0.1:33911: write tcp 127.0.0.1:57598->127.0.0.1:33911: write: broken pipe Aug 19 00:15:52.060278 sshd[4456]: Connection closed by 10.0.0.1 port 60458 Aug 19 00:15:52.060788 sshd-session[4453]: pam_unix(sshd:session): session closed for user core Aug 19 00:15:52.065284 systemd[1]: sshd@25-10.0.0.59:22-10.0.0.1:60458.service: Deactivated successfully. Aug 19 00:15:52.067218 systemd[1]: session-26.scope: Deactivated successfully. Aug 19 00:15:52.069769 systemd-logind[1510]: Session 26 logged out. Waiting for processes to exit. Aug 19 00:15:52.070910 systemd-logind[1510]: Removed session 26.