Sep 8 23:42:28.770108 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 8 23:42:28.770129 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Mon Sep 8 22:16:14 -00 2025
Sep 8 23:42:28.770138 kernel: KASLR enabled
Sep 8 23:42:28.770144 kernel: efi: EFI v2.7 by EDK II
Sep 8 23:42:28.770149 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Sep 8 23:42:28.770167 kernel: random: crng init done
Sep 8 23:42:28.770194 kernel: secureboot: Secure boot disabled
Sep 8 23:42:28.770201 kernel: ACPI: Early table checksum verification disabled
Sep 8 23:42:28.770208 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Sep 8 23:42:28.770216 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 8 23:42:28.770222 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770228 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770233 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770239 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770246 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770253 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770260 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770266 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770272 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 8 23:42:28.770278 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 8 23:42:28.770283 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 8 23:42:28.770290 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:42:28.770296 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Sep 8 23:42:28.770302 kernel: Zone ranges:
Sep 8 23:42:28.770308 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:42:28.770315 kernel: DMA32 empty
Sep 8 23:42:28.770321 kernel: Normal empty
Sep 8 23:42:28.770327 kernel: Device empty
Sep 8 23:42:28.770332 kernel: Movable zone start for each node
Sep 8 23:42:28.770338 kernel: Early memory node ranges
Sep 8 23:42:28.770345 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Sep 8 23:42:28.770351 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Sep 8 23:42:28.770357 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Sep 8 23:42:28.770363 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Sep 8 23:42:28.770368 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Sep 8 23:42:28.770374 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Sep 8 23:42:28.770380 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Sep 8 23:42:28.770387 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Sep 8 23:42:28.770393 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Sep 8 23:42:28.770399 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Sep 8 23:42:28.770408 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Sep 8 23:42:28.770414 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Sep 8 23:42:28.770420 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 8 23:42:28.770428 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 8 23:42:28.770435 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 8 23:42:28.770441 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Sep 8 23:42:28.770447 kernel: psci: probing for conduit method from ACPI.
Sep 8 23:42:28.770454 kernel: psci: PSCIv1.1 detected in firmware.
Sep 8 23:42:28.770460 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 8 23:42:28.770466 kernel: psci: Trusted OS migration not required
Sep 8 23:42:28.770473 kernel: psci: SMC Calling Convention v1.1
Sep 8 23:42:28.770479 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 8 23:42:28.770485 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 8 23:42:28.770493 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 8 23:42:28.770500 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 8 23:42:28.770506 kernel: Detected PIPT I-cache on CPU0
Sep 8 23:42:28.770512 kernel: CPU features: detected: GIC system register CPU interface
Sep 8 23:42:28.770519 kernel: CPU features: detected: Spectre-v4
Sep 8 23:42:28.770525 kernel: CPU features: detected: Spectre-BHB
Sep 8 23:42:28.770531 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 8 23:42:28.770538 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 8 23:42:28.770544 kernel: CPU features: detected: ARM erratum 1418040
Sep 8 23:42:28.770550 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 8 23:42:28.770556 kernel: alternatives: applying boot alternatives
Sep 8 23:42:28.770564 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b288da65d4b75f0b3fa549b2137666f5efe4c54bbf9c99d6059072c88732f23
Sep 8 23:42:28.770572 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 8 23:42:28.770578 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 8 23:42:28.770584 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 8 23:42:28.770591 kernel: Fallback order for Node 0: 0
Sep 8 23:42:28.770597 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 8 23:42:28.770603 kernel: Policy zone: DMA
Sep 8 23:42:28.770610 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 8 23:42:28.770616 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 8 23:42:28.770623 kernel: software IO TLB: area num 4.
Sep 8 23:42:28.770629 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 8 23:42:28.770635 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Sep 8 23:42:28.770643 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 8 23:42:28.770655 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 8 23:42:28.770662 kernel: rcu: RCU event tracing is enabled.
Sep 8 23:42:28.770669 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 8 23:42:28.770675 kernel: Trampoline variant of Tasks RCU enabled.
Sep 8 23:42:28.770681 kernel: Tracing variant of Tasks RCU enabled.
Sep 8 23:42:28.770688 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 8 23:42:28.770694 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 8 23:42:28.770701 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:42:28.770708 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 8 23:42:28.770714 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 8 23:42:28.770723 kernel: GICv3: 256 SPIs implemented
Sep 8 23:42:28.770729 kernel: GICv3: 0 Extended SPIs implemented
Sep 8 23:42:28.770735 kernel: Root IRQ handler: gic_handle_irq
Sep 8 23:42:28.770742 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 8 23:42:28.770748 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 8 23:42:28.770754 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 8 23:42:28.770761 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 8 23:42:28.770767 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 8 23:42:28.770773 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 8 23:42:28.770780 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 8 23:42:28.770786 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 8 23:42:28.770792 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 8 23:42:28.770800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:42:28.770806 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 8 23:42:28.770813 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 8 23:42:28.770819 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 8 23:42:28.770826 kernel: arm-pv: using stolen time PV
Sep 8 23:42:28.770832 kernel: Console: colour dummy device 80x25
Sep 8 23:42:28.770839 kernel: ACPI: Core revision 20240827
Sep 8 23:42:28.770846 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 8 23:42:28.770852 kernel: pid_max: default: 32768 minimum: 301
Sep 8 23:42:28.770859 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 8 23:42:28.770866 kernel: landlock: Up and running.
Sep 8 23:42:28.770873 kernel: SELinux: Initializing.
Sep 8 23:42:28.770879 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:42:28.770886 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 8 23:42:28.770892 kernel: rcu: Hierarchical SRCU implementation.
Sep 8 23:42:28.770899 kernel: rcu: Max phase no-delay instances is 400.
Sep 8 23:42:28.770905 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 8 23:42:28.770912 kernel: Remapping and enabling EFI services.
Sep 8 23:42:28.770918 kernel: smp: Bringing up secondary CPUs ...
Sep 8 23:42:28.770930 kernel: Detected PIPT I-cache on CPU1
Sep 8 23:42:28.770937 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 8 23:42:28.770944 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 8 23:42:28.770952 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:42:28.770959 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 8 23:42:28.770965 kernel: Detected PIPT I-cache on CPU2
Sep 8 23:42:28.770972 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 8 23:42:28.770979 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 8 23:42:28.770988 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:42:28.770994 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 8 23:42:28.771001 kernel: Detected PIPT I-cache on CPU3
Sep 8 23:42:28.771008 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 8 23:42:28.771015 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 8 23:42:28.771022 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 8 23:42:28.771029 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 8 23:42:28.771035 kernel: smp: Brought up 1 node, 4 CPUs
Sep 8 23:42:28.771042 kernel: SMP: Total of 4 processors activated.
Sep 8 23:42:28.771050 kernel: CPU: All CPU(s) started at EL1
Sep 8 23:42:28.771057 kernel: CPU features: detected: 32-bit EL0 Support
Sep 8 23:42:28.771064 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 8 23:42:28.771071 kernel: CPU features: detected: Common not Private translations
Sep 8 23:42:28.771078 kernel: CPU features: detected: CRC32 instructions
Sep 8 23:42:28.771085 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 8 23:42:28.771092 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 8 23:42:28.771098 kernel: CPU features: detected: LSE atomic instructions
Sep 8 23:42:28.771105 kernel: CPU features: detected: Privileged Access Never
Sep 8 23:42:28.771112 kernel: CPU features: detected: RAS Extension Support
Sep 8 23:42:28.771120 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 8 23:42:28.771127 kernel: alternatives: applying system-wide alternatives
Sep 8 23:42:28.771134 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 8 23:42:28.771141 kernel: Memory: 2424480K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 38976K init, 1038K bss, 125472K reserved, 16384K cma-reserved)
Sep 8 23:42:28.771148 kernel: devtmpfs: initialized
Sep 8 23:42:28.771184 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 8 23:42:28.771194 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 8 23:42:28.771201 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 8 23:42:28.771210 kernel: 0 pages in range for non-PLT usage
Sep 8 23:42:28.771217 kernel: 508560 pages in range for PLT usage
Sep 8 23:42:28.771224 kernel: pinctrl core: initialized pinctrl subsystem
Sep 8 23:42:28.771231 kernel: SMBIOS 3.0.0 present.
Sep 8 23:42:28.771238 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 8 23:42:28.771244 kernel: DMI: Memory slots populated: 1/1
Sep 8 23:42:28.771251 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 8 23:42:28.771258 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 8 23:42:28.771265 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 8 23:42:28.771274 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 8 23:42:28.771281 kernel: audit: initializing netlink subsys (disabled)
Sep 8 23:42:28.771288 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1
Sep 8 23:42:28.771295 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 8 23:42:28.771301 kernel: cpuidle: using governor menu
Sep 8 23:42:28.771308 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 8 23:42:28.771315 kernel: ASID allocator initialised with 32768 entries
Sep 8 23:42:28.771322 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 8 23:42:28.771329 kernel: Serial: AMBA PL011 UART driver
Sep 8 23:42:28.771337 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 8 23:42:28.771344 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 8 23:42:28.771351 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 8 23:42:28.771357 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 8 23:42:28.771364 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 8 23:42:28.771371 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 8 23:42:28.771378 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 8 23:42:28.771385 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 8 23:42:28.771391 kernel: ACPI: Added _OSI(Module Device)
Sep 8 23:42:28.771398 kernel: ACPI: Added _OSI(Processor Device)
Sep 8 23:42:28.771406 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 8 23:42:28.771413 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 8 23:42:28.771420 kernel: ACPI: Interpreter enabled
Sep 8 23:42:28.771427 kernel: ACPI: Using GIC for interrupt routing
Sep 8 23:42:28.771434 kernel: ACPI: MCFG table detected, 1 entries
Sep 8 23:42:28.771440 kernel: ACPI: CPU0 has been hot-added
Sep 8 23:42:28.771447 kernel: ACPI: CPU1 has been hot-added
Sep 8 23:42:28.771454 kernel: ACPI: CPU2 has been hot-added
Sep 8 23:42:28.771460 kernel: ACPI: CPU3 has been hot-added
Sep 8 23:42:28.771469 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 8 23:42:28.771476 kernel: printk: legacy console [ttyAMA0] enabled
Sep 8 23:42:28.771483 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 8 23:42:28.771606 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 8 23:42:28.771683 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 8 23:42:28.771743 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 8 23:42:28.771801 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 8 23:42:28.771861 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 8 23:42:28.771870 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 8 23:42:28.771877 kernel: PCI host bridge to bus 0000:00
Sep 8 23:42:28.771945 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 8 23:42:28.771998 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 8 23:42:28.772050 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 8 23:42:28.772101 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 8 23:42:28.772203 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 8 23:42:28.772277 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 8 23:42:28.772340 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 8 23:42:28.772401 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 8 23:42:28.772460 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 8 23:42:28.772520 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 8 23:42:28.772578 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 8 23:42:28.772639 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 8 23:42:28.772707 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 8 23:42:28.772760 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 8 23:42:28.772813 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 8 23:42:28.772822 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 8 23:42:28.772829 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 8 23:42:28.772836 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 8 23:42:28.772845 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 8 23:42:28.772852 kernel: iommu: Default domain type: Translated
Sep 8 23:42:28.772859 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 8 23:42:28.772865 kernel: efivars: Registered efivars operations
Sep 8 23:42:28.772872 kernel: vgaarb: loaded
Sep 8 23:42:28.772879 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 8 23:42:28.772886 kernel: VFS: Disk quotas dquot_6.6.0
Sep 8 23:42:28.772893 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 8 23:42:28.772900 kernel: pnp: PnP ACPI init
Sep 8 23:42:28.772967 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 8 23:42:28.772977 kernel: pnp: PnP ACPI: found 1 devices
Sep 8 23:42:28.772984 kernel: NET: Registered PF_INET protocol family
Sep 8 23:42:28.772991 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 8 23:42:28.772998 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 8 23:42:28.773005 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 8 23:42:28.773012 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 8 23:42:28.773019 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 8 23:42:28.773028 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 8 23:42:28.773035 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:42:28.773042 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 8 23:42:28.773049 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 8 23:42:28.773056 kernel: PCI: CLS 0 bytes, default 64
Sep 8 23:42:28.773062 kernel: kvm [1]: HYP mode not available
Sep 8 23:42:28.773069 kernel: Initialise system trusted keyrings
Sep 8 23:42:28.773076 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 8 23:42:28.773083 kernel: Key type asymmetric registered
Sep 8 23:42:28.773090 kernel: Asymmetric key parser 'x509' registered
Sep 8 23:42:28.773097 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 8 23:42:28.773104 kernel: io scheduler mq-deadline registered
Sep 8 23:42:28.773111 kernel: io scheduler kyber registered
Sep 8 23:42:28.773118 kernel: io scheduler bfq registered
Sep 8 23:42:28.773125 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 8 23:42:28.773132 kernel: ACPI: button: Power Button [PWRB]
Sep 8 23:42:28.773139 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 8 23:42:28.773215 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 8 23:42:28.773226 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 8 23:42:28.773235 kernel: thunder_xcv, ver 1.0
Sep 8 23:42:28.773242 kernel: thunder_bgx, ver 1.0
Sep 8 23:42:28.773248 kernel: nicpf, ver 1.0
Sep 8 23:42:28.773255 kernel: nicvf, ver 1.0
Sep 8 23:42:28.773321 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 8 23:42:28.773377 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-08T23:42:28 UTC (1757374948)
Sep 8 23:42:28.773386 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 8 23:42:28.773393 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 8 23:42:28.773401 kernel: watchdog: NMI not fully supported
Sep 8 23:42:28.773408 kernel: watchdog: Hard watchdog permanently disabled
Sep 8 23:42:28.773415 kernel: NET: Registered PF_INET6 protocol family
Sep 8 23:42:28.773422 kernel: Segment Routing with IPv6
Sep 8 23:42:28.773429 kernel: In-situ OAM (IOAM) with IPv6
Sep 8 23:42:28.773436 kernel: NET: Registered PF_PACKET protocol family
Sep 8 23:42:28.773442 kernel: Key type dns_resolver registered
Sep 8 23:42:28.773449 kernel: registered taskstats version 1
Sep 8 23:42:28.773456 kernel: Loading compiled-in X.509 certificates
Sep 8 23:42:28.773464 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 47cbee3f94dbdda6bd0b2aeb4a40d87813458eab'
Sep 8 23:42:28.773471 kernel: Demotion targets for Node 0: null
Sep 8 23:42:28.773478 kernel: Key type .fscrypt registered
Sep 8 23:42:28.773484 kernel: Key type fscrypt-provisioning registered
Sep 8 23:42:28.773491 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 8 23:42:28.773498 kernel: ima: Allocated hash algorithm: sha1
Sep 8 23:42:28.773505 kernel: ima: No architecture policies found
Sep 8 23:42:28.773512 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 8 23:42:28.773520 kernel: clk: Disabling unused clocks
Sep 8 23:42:28.773527 kernel: PM: genpd: Disabling unused power domains
Sep 8 23:42:28.773533 kernel: Warning: unable to open an initial console.
Sep 8 23:42:28.773540 kernel: Freeing unused kernel memory: 38976K
Sep 8 23:42:28.773547 kernel: Run /init as init process
Sep 8 23:42:28.773554 kernel: with arguments:
Sep 8 23:42:28.773561 kernel: /init
Sep 8 23:42:28.773567 kernel: with environment:
Sep 8 23:42:28.773574 kernel: HOME=/
Sep 8 23:42:28.773581 kernel: TERM=linux
Sep 8 23:42:28.773589 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 8 23:42:28.773596 systemd[1]: Successfully made /usr/ read-only.
Sep 8 23:42:28.773606 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:42:28.773614 systemd[1]: Detected virtualization kvm.
Sep 8 23:42:28.773621 systemd[1]: Detected architecture arm64.
Sep 8 23:42:28.773629 systemd[1]: Running in initrd.
Sep 8 23:42:28.773636 systemd[1]: No hostname configured, using default hostname.
Sep 8 23:42:28.773654 systemd[1]: Hostname set to .
Sep 8 23:42:28.773663 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:42:28.773670 systemd[1]: Queued start job for default target initrd.target.
Sep 8 23:42:28.773677 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:42:28.773685 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:42:28.773693 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 8 23:42:28.773701 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:42:28.773708 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 8 23:42:28.773718 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 8 23:42:28.773727 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 8 23:42:28.773735 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 8 23:42:28.773742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:42:28.773750 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:42:28.773758 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:42:28.773765 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:42:28.773774 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:42:28.773781 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:42:28.773789 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:42:28.773796 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:42:28.773804 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 8 23:42:28.773811 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 8 23:42:28.773819 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:42:28.773827 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:42:28.773835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:42:28.773843 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:42:28.773850 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 8 23:42:28.773858 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:42:28.773865 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 8 23:42:28.773873 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 8 23:42:28.773881 systemd[1]: Starting systemd-fsck-usr.service...
Sep 8 23:42:28.773888 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:42:28.773896 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:42:28.773904 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:42:28.773912 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 8 23:42:28.773920 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:42:28.773927 systemd[1]: Finished systemd-fsck-usr.service.
Sep 8 23:42:28.773936 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 8 23:42:28.773960 systemd-journald[245]: Collecting audit messages is disabled.
Sep 8 23:42:28.773978 systemd-journald[245]: Journal started
Sep 8 23:42:28.773997 systemd-journald[245]: Runtime Journal (/run/log/journal/57fec169290f4acc9d9f02624e1fcad0) is 6M, max 48.5M, 42.4M free.
Sep 8 23:42:28.766123 systemd-modules-load[246]: Inserted module 'overlay'
Sep 8 23:42:28.779001 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 8 23:42:28.779018 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:42:28.780344 systemd-modules-load[246]: Inserted module 'br_netfilter'
Sep 8 23:42:28.781686 kernel: Bridge firewalling registered
Sep 8 23:42:28.781702 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:42:28.782689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:42:28.785193 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 8 23:42:28.788328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 8 23:42:28.789710 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:42:28.792315 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:42:28.796476 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:42:28.802696 systemd-tmpfiles[272]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 8 23:42:28.803245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:42:28.805100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:42:28.807443 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:42:28.810411 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:42:28.812659 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:42:28.814307 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 8 23:42:28.836759 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b288da65d4b75f0b3fa549b2137666f5efe4c54bbf9c99d6059072c88732f23
Sep 8 23:42:28.849868 systemd-resolved[286]: Positive Trust Anchors:
Sep 8 23:42:28.849883 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:42:28.849914 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:42:28.854534 systemd-resolved[286]: Defaulting to hostname 'linux'.
Sep 8 23:42:28.855422 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:42:28.858412 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:42:28.906185 kernel: SCSI subsystem initialized
Sep 8 23:42:28.911177 kernel: Loading iSCSI transport class v2.0-870.
Sep 8 23:42:28.918174 kernel: iscsi: registered transport (tcp)
Sep 8 23:42:28.930225 kernel: iscsi: registered transport (qla4xxx)
Sep 8 23:42:28.930245 kernel: QLogic iSCSI HBA Driver
Sep 8 23:42:28.945877 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 8 23:42:28.961893 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 8 23:42:28.963761 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 8 23:42:29.003705 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:42:29.005722 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 8 23:42:29.060188 kernel: raid6: neonx8 gen() 15771 MB/s
Sep 8 23:42:29.077173 kernel: raid6: neonx4 gen() 15758 MB/s
Sep 8 23:42:29.094172 kernel: raid6: neonx2 gen() 13168 MB/s
Sep 8 23:42:29.111172 kernel: raid6: neonx1 gen() 10422 MB/s
Sep 8 23:42:29.128174 kernel: raid6: int64x8 gen() 6897 MB/s
Sep 8 23:42:29.145168 kernel: raid6: int64x4 gen() 7306 MB/s
Sep 8 23:42:29.162171 kernel: raid6: int64x2 gen() 6095 MB/s
Sep 8 23:42:29.179181 kernel: raid6: int64x1 gen() 5047 MB/s
Sep 8 23:42:29.179198 kernel: raid6: using algorithm neonx8 gen() 15771 MB/s
Sep 8 23:42:29.196193 kernel: raid6: .... xor() 12041 MB/s, rmw enabled
Sep 8 23:42:29.196222 kernel: raid6: using neon recovery algorithm
Sep 8 23:42:29.201175 kernel: xor: measuring software checksum speed
Sep 8 23:42:29.201192 kernel: 8regs : 20982 MB/sec
Sep 8 23:42:29.202179 kernel: 32regs : 19860 MB/sec
Sep 8 23:42:29.202204 kernel: arm64_neon : 28051 MB/sec
Sep 8 23:42:29.202221 kernel: xor: using function: arm64_neon (28051 MB/sec)
Sep 8 23:42:29.254187 kernel: Btrfs loaded, zoned=no, fsverity=no
Sep 8 23:42:29.260846 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:42:29.263208 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:42:29.294210 systemd-udevd[498]: Using default interface naming scheme 'v255'.
Sep 8 23:42:29.298400 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:42:29.300790 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Sep 8 23:42:29.329961 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation
Sep 8 23:42:29.352046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 8 23:42:29.356273 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:42:29.412503 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:42:29.414968 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Sep 8 23:42:29.460923 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Sep 8 23:42:29.461120 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Sep 8 23:42:29.478456 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Sep 8 23:42:29.478499 kernel: GPT:9289727 != 19775487
Sep 8 23:42:29.479193 kernel: GPT:Alternate GPT header not at the end of the disk.
Sep 8 23:42:29.480349 kernel: GPT:9289727 != 19775487
Sep 8 23:42:29.480378 kernel: GPT: Use GNU Parted to correct GPT errors.
Sep 8 23:42:29.480388 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:42:29.484399 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 8 23:42:29.484514 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:42:29.487264 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:42:29.489404 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:42:29.503028 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Sep 8 23:42:29.518936 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:42:29.527701 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Sep 8 23:42:29.529876 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:42:29.537920 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:42:29.544499 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Sep 8 23:42:29.545464 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Sep 8 23:42:29.547973 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:42:29.549818 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:42:29.551610 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:42:29.554043 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Sep 8 23:42:29.555689 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Sep 8 23:42:29.570472 disk-uuid[590]: Primary Header is updated.
Sep 8 23:42:29.570472 disk-uuid[590]: Secondary Entries is updated.
Sep 8 23:42:29.570472 disk-uuid[590]: Secondary Header is updated.
Sep 8 23:42:29.574629 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:42:29.577227 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:42:30.582202 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Sep 8 23:42:30.582533 disk-uuid[595]: The operation has completed successfully.
Sep 8 23:42:30.614597 systemd[1]: disk-uuid.service: Deactivated successfully.
Sep 8 23:42:30.614731 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Sep 8 23:42:30.634660 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Sep 8 23:42:30.668290 sh[609]: Success
Sep 8 23:42:30.681820 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Sep 8 23:42:30.681877 kernel: device-mapper: uevent: version 1.0.3
Sep 8 23:42:30.681887 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Sep 8 23:42:30.690180 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Sep 8 23:42:30.720400 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Sep 8 23:42:30.722081 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Sep 8 23:42:30.733741 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Sep 8 23:42:30.740258 kernel: BTRFS: device fsid 034f8af6-cbd9-419d-a71e-a7d9edddc941 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (621)
Sep 8 23:42:30.742167 kernel: BTRFS info (device dm-0): first mount of filesystem 034f8af6-cbd9-419d-a71e-a7d9edddc941
Sep 8 23:42:30.742196 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:42:30.746318 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Sep 8 23:42:30.746365 kernel: BTRFS info (device dm-0): enabling free space tree
Sep 8 23:42:30.747319 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Sep 8 23:42:30.748551 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Sep 8 23:42:30.749863 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Sep 8 23:42:30.750737 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Sep 8 23:42:30.766889 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Sep 8 23:42:30.782931 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (650)
Sep 8 23:42:30.782985 kernel: BTRFS info (device vda6): first mount of filesystem 0737dc78-a948-430a-939d-c5a2ab8b0159
Sep 8 23:42:30.782995 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:42:30.786536 kernel: BTRFS info (device vda6): turning on async discard
Sep 8 23:42:30.786584 kernel: BTRFS info (device vda6): enabling free space tree
Sep 8 23:42:30.791176 kernel: BTRFS info (device vda6): last unmount of filesystem 0737dc78-a948-430a-939d-c5a2ab8b0159
Sep 8 23:42:30.791495 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 8 23:42:30.793249 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 8 23:42:30.884193 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:42:30.887180 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:42:30.912314 ignition[691]: Ignition 2.21.0
Sep 8 23:42:30.912326 ignition[691]: Stage: fetch-offline
Sep 8 23:42:30.912360 ignition[691]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:42:30.912367 ignition[691]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:42:30.912621 ignition[691]: parsed url from cmdline: ""
Sep 8 23:42:30.912625 ignition[691]: no config URL provided
Sep 8 23:42:30.912630 ignition[691]: reading system config file "/usr/lib/ignition/user.ign"
Sep 8 23:42:30.912636 ignition[691]: no config at "/usr/lib/ignition/user.ign"
Sep 8 23:42:30.912670 ignition[691]: op(1): [started] loading QEMU firmware config module
Sep 8 23:42:30.912675 ignition[691]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 8 23:42:30.920380 ignition[691]: op(1): [finished] loading QEMU firmware config module
Sep 8 23:42:30.929451 systemd-networkd[803]: lo: Link UP
Sep 8 23:42:30.929466 systemd-networkd[803]: lo: Gained carrier
Sep 8 23:42:30.930154 systemd-networkd[803]: Enumeration completed
Sep 8 23:42:30.930554 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:42:30.930558 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:42:30.931319 systemd-networkd[803]: eth0: Link UP
Sep 8 23:42:30.931460 systemd-networkd[803]: eth0: Gained carrier
Sep 8 23:42:30.931469 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:42:30.932543 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:42:30.934122 systemd[1]: Reached target network.target - Network.
Sep 8 23:42:30.954197 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:42:30.976145 ignition[691]: parsing config with SHA512: d957205d4ac8978783011a3bafad8b61179ba5429aba49542917de0c8efb8f38ccf84537c0dd82bf6f5d91143d3f40309d2e7ee1905242795cf49bf92d46ee8b
Sep 8 23:42:30.980758 unknown[691]: fetched base config from "system"
Sep 8 23:42:30.981148 ignition[691]: fetch-offline: fetch-offline passed
Sep 8 23:42:30.980767 unknown[691]: fetched user config from "qemu"
Sep 8 23:42:30.981221 ignition[691]: Ignition finished successfully
Sep 8 23:42:30.983959 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:42:30.985218 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 8 23:42:30.985969 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Sep 8 23:42:31.023140 ignition[811]: Ignition 2.21.0
Sep 8 23:42:31.023170 ignition[811]: Stage: kargs
Sep 8 23:42:31.023319 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:42:31.023328 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:42:31.024994 ignition[811]: kargs: kargs passed
Sep 8 23:42:31.025059 ignition[811]: Ignition finished successfully
Sep 8 23:42:31.029612 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 8 23:42:31.031520 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 8 23:42:31.070445 ignition[818]: Ignition 2.21.0
Sep 8 23:42:31.070462 ignition[818]: Stage: disks
Sep 8 23:42:31.070604 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Sep 8 23:42:31.070614 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:42:31.074609 ignition[818]: disks: disks passed
Sep 8 23:42:31.075266 ignition[818]: Ignition finished successfully
Sep 8 23:42:31.076970 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 8 23:42:31.079046 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 8 23:42:31.080072 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 8 23:42:31.081993 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:42:31.083821 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:42:31.085440 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:42:31.087803 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 8 23:42:31.122365 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 8 23:42:31.126069 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 8 23:42:31.129267 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 8 23:42:31.186168 kernel: EXT4-fs (vda9): mounted filesystem 1106ba00-3c53-4741-ace6-b77ffd1f2115 r/w with ordered data mode. Quota mode: none.
Sep 8 23:42:31.186733 systemd[1]: Mounted sysroot.mount - /sysroot.
Sep 8 23:42:31.187880 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:42:31.190118 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:42:31.191868 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Sep 8 23:42:31.192739 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Sep 8 23:42:31.192780 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Sep 8 23:42:31.192803 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:42:31.204984 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Sep 8 23:42:31.208296 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Sep 8 23:42:31.212264 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (836)
Sep 8 23:42:31.212293 kernel: BTRFS info (device vda6): first mount of filesystem 0737dc78-a948-430a-939d-c5a2ab8b0159
Sep 8 23:42:31.212304 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:42:31.214278 kernel: BTRFS info (device vda6): turning on async discard
Sep 8 23:42:31.214311 kernel: BTRFS info (device vda6): enabling free space tree
Sep 8 23:42:31.215824 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:42:31.244586 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory
Sep 8 23:42:31.248651 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory
Sep 8 23:42:31.252077 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory
Sep 8 23:42:31.255822 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory
Sep 8 23:42:31.328245 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Sep 8 23:42:31.330457 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Sep 8 23:42:31.332009 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Sep 8 23:42:31.351189 kernel: BTRFS info (device vda6): last unmount of filesystem 0737dc78-a948-430a-939d-c5a2ab8b0159
Sep 8 23:42:31.364344 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Sep 8 23:42:31.377916 ignition[951]: INFO : Ignition 2.21.0
Sep 8 23:42:31.377916 ignition[951]: INFO : Stage: mount
Sep 8 23:42:31.379420 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:42:31.379420 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:42:31.379420 ignition[951]: INFO : mount: mount passed
Sep 8 23:42:31.379420 ignition[951]: INFO : Ignition finished successfully
Sep 8 23:42:31.380797 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Sep 8 23:42:31.382717 systemd[1]: Starting ignition-files.service - Ignition (files)...
Sep 8 23:42:31.907321 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Sep 8 23:42:31.908767 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Sep 8 23:42:31.936231 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (963)
Sep 8 23:42:31.938578 kernel: BTRFS info (device vda6): first mount of filesystem 0737dc78-a948-430a-939d-c5a2ab8b0159
Sep 8 23:42:31.938610 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 8 23:42:31.942880 kernel: BTRFS info (device vda6): turning on async discard
Sep 8 23:42:31.942917 kernel: BTRFS info (device vda6): enabling free space tree
Sep 8 23:42:31.944564 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 8 23:42:31.968981 ignition[981]: INFO : Ignition 2.21.0
Sep 8 23:42:31.968981 ignition[981]: INFO : Stage: files
Sep 8 23:42:31.970359 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:42:31.970359 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:42:31.974221 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Sep 8 23:42:31.974221 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 8 23:42:31.974221 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 8 23:42:31.974221 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 8 23:42:31.979492 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 8 23:42:31.979492 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 8 23:42:31.979492 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 8 23:42:31.979492 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 8 23:42:31.974829 unknown[981]: wrote ssh authorized keys file for user: core
Sep 8 23:42:32.045858 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 8 23:42:32.159276 systemd-networkd[803]: eth0: Gained IPv6LL
Sep 8 23:42:32.376309 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 8 23:42:32.378541 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:42:32.378541 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 8 23:42:32.589114 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 8 23:42:32.743221 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 8 23:42:32.743221 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 8 23:42:32.746854 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:42:32.763772 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:42:32.763772 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:42:32.763772 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 8 23:42:32.982750 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 8 23:42:33.705436 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 8 23:42:33.705436 ignition[981]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 8 23:42:33.708890 ignition[981]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:42:33.729079 ignition[981]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:42:33.732684 ignition[981]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 8 23:42:33.735242 ignition[981]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 8 23:42:33.735242 ignition[981]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 8 23:42:33.735242 ignition[981]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 8 23:42:33.735242 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:42:33.735242 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 8 23:42:33.735242 ignition[981]: INFO : files: files passed
Sep 8 23:42:33.735242 ignition[981]: INFO : Ignition finished successfully
Sep 8 23:42:33.736607 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 8 23:42:33.739903 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 8 23:42:33.741711 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 8 23:42:33.753839 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 8 23:42:33.753953 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 8 23:42:33.757258 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 8 23:42:33.761786 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:42:33.761786 initrd-setup-root-after-ignition[1011]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:42:33.765862 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 8 23:42:33.767019 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 8 23:42:33.768933 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 8 23:42:33.771458 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 8 23:42:33.819099 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 8 23:42:33.819237 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 8 23:42:33.821263 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 8 23:42:33.822957 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 8 23:42:33.824563 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 8 23:42:33.825290 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 8 23:42:33.863200 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 8 23:42:33.868720 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Sep 8 23:42:33.892357 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:42:33.893447 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:42:33.895271 systemd[1]: Stopped target timers.target - Timer Units.
Sep 8 23:42:33.896812 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Sep 8 23:42:33.896950 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Sep 8 23:42:33.899203 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Sep 8 23:42:33.900933 systemd[1]: Stopped target basic.target - Basic System.
Sep 8 23:42:33.902250 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Sep 8 23:42:33.903728 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Sep 8 23:42:33.905244 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Sep 8 23:42:33.907034 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Sep 8 23:42:33.908759 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Sep 8 23:42:33.910365 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Sep 8 23:42:33.911977 systemd[1]: Stopped target sysinit.target - System Initialization.
Sep 8 23:42:33.913652 systemd[1]: Stopped target local-fs.target - Local File Systems.
Sep 8 23:42:33.915039 systemd[1]: Stopped target swap.target - Swaps.
Sep 8 23:42:33.916289 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Sep 8 23:42:33.916414 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Sep 8 23:42:33.918518 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:42:33.920603 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:42:33.922462 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Sep 8 23:42:33.926482 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:42:33.928014 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Sep 8 23:42:33.928234 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Sep 8 23:42:33.930988 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Sep 8 23:42:33.931099 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 8 23:42:33.932842 systemd[1]: Stopped target paths.target - Path Units.
Sep 8 23:42:33.937750 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Sep 8 23:42:33.941279 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:42:33.942454 systemd[1]: Stopped target slices.target - Slice Units.
Sep 8 23:42:33.944399 systemd[1]: Stopped target sockets.target - Socket Units.
Sep 8 23:42:33.945843 systemd[1]: iscsid.socket: Deactivated successfully.
Sep 8 23:42:33.945931 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Sep 8 23:42:33.947265 systemd[1]: iscsiuio.socket: Deactivated successfully.
Sep 8 23:42:33.947339 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 8 23:42:33.948822 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Sep 8 23:42:33.948945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 8 23:42:33.950433 systemd[1]: ignition-files.service: Deactivated successfully.
Sep 8 23:42:33.950561 systemd[1]: Stopped ignition-files.service - Ignition (files).
Sep 8 23:42:33.952745 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Sep 8 23:42:33.954182 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Sep 8 23:42:33.954302 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:42:33.957089 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Sep 8 23:42:33.958522 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Sep 8 23:42:33.958650 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:42:33.960288 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Sep 8 23:42:33.960387 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Sep 8 23:42:33.967514 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Sep 8 23:42:33.967596 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Sep 8 23:42:33.975506 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Sep 8 23:42:33.980676 ignition[1036]: INFO : Ignition 2.21.0
Sep 8 23:42:33.980676 ignition[1036]: INFO : Stage: umount
Sep 8 23:42:33.982249 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 8 23:42:33.982249 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 8 23:42:33.982249 ignition[1036]: INFO : umount: umount passed
Sep 8 23:42:33.982249 ignition[1036]: INFO : Ignition finished successfully
Sep 8 23:42:33.983820 systemd[1]: ignition-mount.service: Deactivated successfully.
Sep 8 23:42:33.985654 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Sep 8 23:42:33.987473 systemd[1]: Stopped target network.target - Network.
Sep 8 23:42:33.988757 systemd[1]: ignition-disks.service: Deactivated successfully.
Sep 8 23:42:33.988815 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Sep 8 23:42:33.991175 systemd[1]: ignition-kargs.service: Deactivated successfully.
Sep 8 23:42:33.991232 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Sep 8 23:42:33.992735 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 8 23:42:33.992784 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 8 23:42:33.994412 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 8 23:42:33.994450 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 8 23:42:33.996114 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 8 23:42:33.997933 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 8 23:42:34.003321 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 8 23:42:34.003427 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 8 23:42:34.013438 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 8 23:42:34.013758 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 8 23:42:34.013800 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:42:34.017861 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 8 23:42:34.018072 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 8 23:42:34.018175 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 8 23:42:34.020828 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 8 23:42:34.021226 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 8 23:42:34.023147 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 8 23:42:34.023205 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:42:34.025948 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 8 23:42:34.026797 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 8 23:42:34.026856 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 8 23:42:34.029240 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 8 23:42:34.029297 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:42:34.031966 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 8 23:42:34.032008 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:42:34.034392 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:42:34.037659 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 8 23:42:34.051146 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 8 23:42:34.051305 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 8 23:42:34.052996 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 8 23:42:34.054319 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 8 23:42:34.055890 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 8 23:42:34.055940 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 8 23:42:34.057736 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 8 23:42:34.057859 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:42:34.059600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 8 23:42:34.059674 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:42:34.061046 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 8 23:42:34.061082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:42:34.062702 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 8 23:42:34.062750 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 8 23:42:34.065662 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 8 23:42:34.065719 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 8 23:42:34.068253 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 8 23:42:34.068307 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 8 23:42:34.071762 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 8 23:42:34.072690 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 8 23:42:34.072755 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 8 23:42:34.076013 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 8 23:42:34.076058 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:42:34.079049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 8 23:42:34.079096 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:42:34.094556 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 8 23:42:34.094684 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 8 23:42:34.096753 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 8 23:42:34.099191 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 8 23:42:34.129533 systemd[1]: Switching root.
Sep 8 23:42:34.161467 systemd-journald[245]: Journal stopped
Sep 8 23:42:34.919078 systemd-journald[245]: Received SIGTERM from PID 1 (systemd).
Sep 8 23:42:34.919133 kernel: SELinux: policy capability network_peer_controls=1
Sep 8 23:42:34.919153 kernel: SELinux: policy capability open_perms=1
Sep 8 23:42:34.919180 kernel: SELinux: policy capability extended_socket_class=1
Sep 8 23:42:34.919190 kernel: SELinux: policy capability always_check_network=0
Sep 8 23:42:34.919199 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 8 23:42:34.919209 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 8 23:42:34.919219 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 8 23:42:34.919228 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 8 23:42:34.919237 kernel: SELinux: policy capability userspace_initial_context=0
Sep 8 23:42:34.919246 kernel: audit: type=1403 audit(1757374954.324:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 8 23:42:34.919265 systemd[1]: Successfully loaded SELinux policy in 32.093ms.
Sep 8 23:42:34.919285 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.110ms.
Sep 8 23:42:34.919296 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 8 23:42:34.919307 systemd[1]: Detected virtualization kvm.
Sep 8 23:42:34.919320 systemd[1]: Detected architecture arm64.
Sep 8 23:42:34.919330 systemd[1]: Detected first boot.
Sep 8 23:42:34.919340 systemd[1]: Initializing machine ID from VM UUID.
Sep 8 23:42:34.919350 zram_generator::config[1082]: No configuration found.
Sep 8 23:42:34.919363 kernel: NET: Registered PF_VSOCK protocol family
Sep 8 23:42:34.919372 systemd[1]: Populated /etc with preset unit settings.
Sep 8 23:42:34.919383 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 8 23:42:34.919393 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 8 23:42:34.919403 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 8 23:42:34.919412 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 8 23:42:34.919422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 8 23:42:34.919432 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 8 23:42:34.919442 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 8 23:42:34.919454 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 8 23:42:34.919465 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 8 23:42:34.919475 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 8 23:42:34.919485 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 8 23:42:34.919494 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 8 23:42:34.919504 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 8 23:42:34.919514 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 8 23:42:34.919524 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 8 23:42:34.919536 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 8 23:42:34.919546 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 8 23:42:34.919556 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 8 23:42:34.919566 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 8 23:42:34.919577 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 8 23:42:34.919587 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 8 23:42:34.919597 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 8 23:42:34.919607 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 8 23:42:34.919619 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 8 23:42:34.919634 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 8 23:42:34.919646 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 8 23:42:34.919656 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 8 23:42:34.919667 systemd[1]: Reached target slices.target - Slice Units.
Sep 8 23:42:34.919677 systemd[1]: Reached target swap.target - Swaps.
Sep 8 23:42:34.919687 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 8 23:42:34.919696 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 8 23:42:34.919707 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 8 23:42:34.919720 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 8 23:42:34.919730 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 8 23:42:34.919740 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 8 23:42:34.919751 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 8 23:42:34.919761 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 8 23:42:34.919771 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 8 23:42:34.919781 systemd[1]: Mounting media.mount - External Media Directory...
Sep 8 23:42:34.919791 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 8 23:42:34.919801 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 8 23:42:34.919812 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 8 23:42:34.919824 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 8 23:42:34.919834 systemd[1]: Reached target machines.target - Containers.
Sep 8 23:42:34.919844 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 8 23:42:34.919854 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:42:34.919864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 8 23:42:34.919874 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 8 23:42:34.919884 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:42:34.919894 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 8 23:42:34.919906 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:42:34.919916 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 8 23:42:34.919927 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:42:34.919937 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 8 23:42:34.919947 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 8 23:42:34.919973 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 8 23:42:34.919984 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 8 23:42:34.919995 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 8 23:42:34.920006 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:42:34.920021 kernel: fuse: init (API version 7.41)
Sep 8 23:42:34.920030 kernel: ACPI: bus type drm_connector registered
Sep 8 23:42:34.920040 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 8 23:42:34.920050 kernel: loop: module loaded
Sep 8 23:42:34.920059 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 8 23:42:34.920069 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 8 23:42:34.920079 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 8 23:42:34.920090 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 8 23:42:34.920102 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 8 23:42:34.920112 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 8 23:42:34.920122 systemd[1]: Stopped verity-setup.service.
Sep 8 23:42:34.920152 systemd-journald[1157]: Collecting audit messages is disabled.
Sep 8 23:42:34.920185 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 8 23:42:34.920196 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 8 23:42:34.920206 systemd-journald[1157]: Journal started
Sep 8 23:42:34.920226 systemd-journald[1157]: Runtime Journal (/run/log/journal/57fec169290f4acc9d9f02624e1fcad0) is 6M, max 48.5M, 42.4M free.
Sep 8 23:42:34.715414 systemd[1]: Queued start job for default target multi-user.target.
Sep 8 23:42:34.735276 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 8 23:42:34.735691 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 8 23:42:34.923202 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 8 23:42:34.923801 systemd[1]: Mounted media.mount - External Media Directory.
Sep 8 23:42:34.925461 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 8 23:42:34.926525 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 8 23:42:34.927517 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 8 23:42:34.928581 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 8 23:42:34.931214 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 8 23:42:34.932554 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 8 23:42:34.932815 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 8 23:42:34.934289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:42:34.934450 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:42:34.935521 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 8 23:42:34.935687 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 8 23:42:34.936882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:42:34.937051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:42:34.938326 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 8 23:42:34.938504 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 8 23:42:34.940561 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:42:34.940731 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:42:34.941949 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 8 23:42:34.943151 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 8 23:42:34.944431 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 8 23:42:34.945731 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 8 23:42:34.957943 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 8 23:42:34.960352 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 8 23:42:34.962108 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 8 23:42:34.963130 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 8 23:42:34.963178 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 8 23:42:34.964938 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 8 23:42:34.968937 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 8 23:42:34.970197 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:42:34.971343 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 8 23:42:34.973007 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 8 23:42:34.974114 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 8 23:42:34.976303 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 8 23:42:34.977222 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 8 23:42:34.978146 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:42:34.980549 systemd-journald[1157]: Time spent on flushing to /var/log/journal/57fec169290f4acc9d9f02624e1fcad0 is 23.379ms for 887 entries.
Sep 8 23:42:34.980549 systemd-journald[1157]: System Journal (/var/log/journal/57fec169290f4acc9d9f02624e1fcad0) is 8M, max 195.6M, 187.6M free.
Sep 8 23:42:35.010464 systemd-journald[1157]: Received client request to flush runtime journal.
Sep 8 23:42:35.010504 kernel: loop0: detected capacity change from 0 to 211168
Sep 8 23:42:34.982466 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 8 23:42:34.985410 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 8 23:42:34.990497 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 8 23:42:34.991894 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 8 23:42:34.992993 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 8 23:42:35.008598 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 8 23:42:35.009846 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 8 23:42:35.014190 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 8 23:42:35.016737 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 8 23:42:35.018895 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 8 23:42:35.020532 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:42:35.034859 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 8 23:42:35.038948 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 8 23:42:35.040180 kernel: loop1: detected capacity change from 0 to 138376
Sep 8 23:42:35.052825 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 8 23:42:35.069175 kernel: loop2: detected capacity change from 0 to 107312
Sep 8 23:42:35.068624 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 8 23:42:35.068646 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 8 23:42:35.074198 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 8 23:42:35.099186 kernel: loop3: detected capacity change from 0 to 211168
Sep 8 23:42:35.106184 kernel: loop4: detected capacity change from 0 to 138376
Sep 8 23:42:35.113189 kernel: loop5: detected capacity change from 0 to 107312
Sep 8 23:42:35.116798 (sd-merge)[1221]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 8 23:42:35.117214 (sd-merge)[1221]: Merged extensions into '/usr'.
Sep 8 23:42:35.121126 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 8 23:42:35.121141 systemd[1]: Reloading...
Sep 8 23:42:35.180189 zram_generator::config[1247]: No configuration found.
Sep 8 23:42:35.269333 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 8 23:42:35.273217 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 8 23:42:35.350438 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 8 23:42:35.350607 systemd[1]: Reloading finished in 229 ms.
Sep 8 23:42:35.390944 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 8 23:42:35.392481 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 8 23:42:35.403505 systemd[1]: Starting ensure-sysext.service...
Sep 8 23:42:35.405238 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 8 23:42:35.427853 systemd[1]: Reload requested from client PID 1281 ('systemctl') (unit ensure-sysext.service)...
Sep 8 23:42:35.427873 systemd[1]: Reloading...
Sep 8 23:42:35.427975 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 8 23:42:35.428001 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 8 23:42:35.428280 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 8 23:42:35.428477 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 8 23:42:35.429098 systemd-tmpfiles[1282]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 8 23:42:35.429328 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Sep 8 23:42:35.429379 systemd-tmpfiles[1282]: ACLs are not supported, ignoring.
Sep 8 23:42:35.433050 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Sep 8 23:42:35.433062 systemd-tmpfiles[1282]: Skipping /boot
Sep 8 23:42:35.442321 systemd-tmpfiles[1282]: Detected autofs mount point /boot during canonicalization of boot.
Sep 8 23:42:35.442333 systemd-tmpfiles[1282]: Skipping /boot
Sep 8 23:42:35.478233 zram_generator::config[1312]: No configuration found.
Sep 8 23:42:35.545965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 8 23:42:35.620962 systemd[1]: Reloading finished in 192 ms.
Sep 8 23:42:35.642718 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 8 23:42:35.648128 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 8 23:42:35.658220 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:42:35.660263 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 8 23:42:35.667420 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 8 23:42:35.672948 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 8 23:42:35.675368 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 8 23:42:35.680067 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 8 23:42:35.686025 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:42:35.694421 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:42:35.698367 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:42:35.701371 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:42:35.702211 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:42:35.702330 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:42:35.703863 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 8 23:42:35.707192 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 8 23:42:35.708528 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 8 23:42:35.710007 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:42:35.710143 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:42:35.711579 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:42:35.711725 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:42:35.713222 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:42:35.713358 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:42:35.722019 systemd-udevd[1350]: Using default interface naming scheme 'v255'.
Sep 8 23:42:35.722586 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:42:35.723838 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:42:35.727385 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:42:35.729181 augenrules[1381]: No rules
Sep 8 23:42:35.739993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:42:35.740925 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:42:35.741039 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:42:35.742945 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 8 23:42:35.743798 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 8 23:42:35.746028 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 8 23:42:35.747391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 8 23:42:35.749958 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:42:35.750185 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:42:35.751580 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 8 23:42:35.753652 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:42:35.753841 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:42:35.758406 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:42:35.758608 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:42:35.760621 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:42:35.760806 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:42:35.784327 systemd[1]: Finished ensure-sysext.service.
Sep 8 23:42:35.785456 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 8 23:42:35.795207 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:42:35.796034 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 8 23:42:35.796965 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 8 23:42:35.800569 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 8 23:42:35.806376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 8 23:42:35.808248 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 8 23:42:35.809747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 8 23:42:35.809788 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 8 23:42:35.811304 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 8 23:42:35.816311 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 8 23:42:35.817134 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 8 23:42:35.817600 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 8 23:42:35.817799 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 8 23:42:35.820737 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 8 23:42:35.820895 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 8 23:42:35.822165 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 8 23:42:35.822322 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 8 23:42:35.825790 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 8 23:42:35.831298 augenrules[1428]: /sbin/augenrules: No change
Sep 8 23:42:35.838509 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 8 23:42:35.838677 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 8 23:42:35.840175 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 8 23:42:35.847806 augenrules[1456]: No rules
Sep 8 23:42:35.849255 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:42:35.849437 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:42:35.902756 systemd-resolved[1348]: Positive Trust Anchors:
Sep 8 23:42:35.902774 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 8 23:42:35.902806 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 8 23:42:35.908653 systemd-resolved[1348]: Defaulting to hostname 'linux'.
Sep 8 23:42:35.910014 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 8 23:42:35.912269 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 8 23:42:35.913214 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 8 23:42:35.921506 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 8 23:42:35.922934 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Sep 8 23:42:35.924123 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 8 23:42:35.925230 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Sep 8 23:42:35.925837 systemd-networkd[1434]: lo: Link UP
Sep 8 23:42:35.925849 systemd-networkd[1434]: lo: Gained carrier
Sep 8 23:42:35.926284 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Sep 8 23:42:35.926654 systemd-networkd[1434]: Enumeration completed
Sep 8 23:42:35.927035 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:42:35.927042 systemd-networkd[1434]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 8 23:42:35.927510 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Sep 8 23:42:35.927557 systemd-networkd[1434]: eth0: Link UP
Sep 8 23:42:35.927682 systemd-networkd[1434]: eth0: Gained carrier
Sep 8 23:42:35.927695 systemd-networkd[1434]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 8 23:42:35.928496 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Sep 8 23:42:35.928528 systemd[1]: Reached target paths.target - Path Units.
Sep 8 23:42:35.929229 systemd[1]: Reached target time-set.target - System Time Set.
Sep 8 23:42:35.930319 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Sep 8 23:42:35.931226 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Sep 8 23:42:35.932113 systemd[1]: Reached target timers.target - Timer Units.
Sep 8 23:42:35.933940 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Sep 8 23:42:35.936099 systemd[1]: Starting docker.socket - Docker Socket for the API...
Sep 8 23:42:35.939135 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Sep 8 23:42:35.939201 systemd-networkd[1434]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 8 23:42:35.939831 systemd-timesyncd[1437]: Network configuration changed, trying to establish connection.
Sep 8 23:42:35.940248 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Sep 8 23:42:35.941034 systemd-timesyncd[1437]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Sep 8 23:42:35.941133 systemd-timesyncd[1437]: Initial clock synchronization to Mon 2025-09-08 23:42:35.790740 UTC.
Sep 8 23:42:35.941376 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Sep 8 23:42:35.943891 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Sep 8 23:42:35.945087 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Sep 8 23:42:35.947008 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 8 23:42:35.948352 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 8 23:42:35.949395 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Sep 8 23:42:35.950259 systemd[1]: Reached target network.target - Network.
Sep 8 23:42:35.950927 systemd[1]: Reached target sockets.target - Socket Units.
Sep 8 23:42:35.951823 systemd[1]: Reached target basic.target - Basic System.
Sep 8 23:42:35.952601 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Sep 8 23:42:35.952635 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Sep 8 23:42:35.953444 systemd[1]: Starting containerd.service - containerd container runtime...
Sep 8 23:42:35.955271 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Sep 8 23:42:35.957869 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Sep 8 23:42:35.960345 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Sep 8 23:42:35.962065 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Sep 8 23:42:35.963267 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Sep 8 23:42:35.964362 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Sep 8 23:42:35.967037 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Sep 8 23:42:35.968764 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Sep 8 23:42:35.970465 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Sep 8 23:42:35.975968 systemd[1]: Starting systemd-logind.service - User Login Management...
Sep 8 23:42:35.978342 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 8 23:42:35.980091 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Sep 8 23:42:35.981968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Sep 8 23:42:35.982363 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Sep 8 23:42:35.982896 systemd[1]: Starting update-engine.service - Update Engine...
Sep 8 23:42:35.983331 jq[1480]: false
Sep 8 23:42:35.984967 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Sep 8 23:42:35.997354 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Sep 8 23:42:36.000631 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Sep 8 23:42:36.001858 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Sep 8 23:42:36.002014 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Sep 8 23:42:36.004011 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Sep 8 23:42:36.004223 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Sep 8 23:42:36.013471 extend-filesystems[1481]: Found /dev/vda6
Sep 8 23:42:36.015859 systemd[1]: motdgen.service: Deactivated successfully.
Sep 8 23:42:36.017193 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Sep 8 23:42:36.025670 dbus-daemon[1475]: [system] SELinux support is enabled
Sep 8 23:42:36.026068 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Sep 8 23:42:36.028960 update_engine[1499]: I20250908 23:42:36.028422 1499 main.cc:92] Flatcar Update Engine starting
Sep 8 23:42:36.029027 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Sep 8 23:42:36.029048 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Sep 8 23:42:36.030886 jq[1500]: true
Sep 8 23:42:36.030964 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Sep 8 23:42:36.030979 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Sep 8 23:42:36.032524 update_engine[1499]: I20250908 23:42:36.032480 1499 update_check_scheduler.cc:74] Next update check in 4m41s
Sep 8 23:42:36.034319 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Sep 8 23:42:36.038863 systemd[1]: Started update-engine.service - Update Engine.
Sep 8 23:42:36.041317 extend-filesystems[1481]: Found /dev/vda9
Sep 8 23:42:36.043626 extend-filesystems[1481]: Checking size of /dev/vda9
Sep 8 23:42:36.047103 (ntainerd)[1521]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Sep 8 23:42:36.047446 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Sep 8 23:42:36.056236 extend-filesystems[1481]: Resized partition /dev/vda9
Sep 8 23:42:36.059091 jq[1529]: true
Sep 8 23:42:36.059716 extend-filesystems[1536]: resize2fs 1.47.2 (1-Jan-2025)
Sep 8 23:42:36.061995 tar[1519]: linux-arm64/LICENSE
Sep 8 23:42:36.062218 tar[1519]: linux-arm64/helm
Sep 8 23:42:36.075181 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Sep 8 23:42:36.085454 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 8 23:42:36.100171 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Sep 8 23:42:36.114433 extend-filesystems[1536]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Sep 8 23:42:36.114433 extend-filesystems[1536]: old_desc_blocks = 1, new_desc_blocks = 1
Sep 8 23:42:36.114433 extend-filesystems[1536]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Sep 8 23:42:36.118114 extend-filesystems[1481]: Resized filesystem in /dev/vda9
Sep 8 23:42:36.117867 systemd[1]: extend-filesystems.service: Deactivated successfully.
Sep 8 23:42:36.118059 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Sep 8 23:42:36.128089 locksmithd[1531]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Sep 8 23:42:36.129087 bash[1560]: Updated "/home/core/.ssh/authorized_keys"
Sep 8 23:42:36.132360 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Sep 8 23:42:36.133866 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Sep 8 23:42:36.184418 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 8 23:42:36.192047 systemd-logind[1491]: Watching system buttons on /dev/input/event0 (Power Button)
Sep 8 23:42:36.192280 systemd-logind[1491]: New seat seat0.
Sep 8 23:42:36.193533 systemd[1]: Started systemd-logind.service - User Login Management.
Sep 8 23:42:36.265570 containerd[1521]: time="2025-09-08T23:42:36Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Sep 8 23:42:36.267363 containerd[1521]: time="2025-09-08T23:42:36.267322737Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Sep 8 23:42:36.283535 containerd[1521]: time="2025-09-08T23:42:36.283494166Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.538µs"
Sep 8 23:42:36.283535 containerd[1521]: time="2025-09-08T23:42:36.283528273Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Sep 8 23:42:36.283636 containerd[1521]: time="2025-09-08T23:42:36.283545739Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Sep 8 23:42:36.283703 containerd[1521]: time="2025-09-08T23:42:36.283680598Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Sep 8 23:42:36.283737 containerd[1521]: time="2025-09-08T23:42:36.283702028Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Sep 8 23:42:36.283737 containerd[1521]: time="2025-09-08T23:42:36.283727657Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 8 23:42:36.283792 containerd[1521]: time="2025-09-08T23:42:36.283775188Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Sep 8 23:42:36.283792 containerd[1521]: time="2025-09-08T23:42:36.283789239Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284095 containerd[1521]: time="2025-09-08T23:42:36.284066689Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284095 containerd[1521]: time="2025-09-08T23:42:36.284092279Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284148 containerd[1521]: time="2025-09-08T23:42:36.284104328Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284148 containerd[1521]: time="2025-09-08T23:42:36.284112688Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284237 containerd[1521]: time="2025-09-08T23:42:36.284218700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284592 containerd[1521]: time="2025-09-08T23:42:36.284565031Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284622 containerd[1521]: time="2025-09-08T23:42:36.284606753Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Sep 8 23:42:36.284622 containerd[1521]: time="2025-09-08T23:42:36.284618057Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Sep 8 23:42:36.284677 containerd[1521]: time="2025-09-08T23:42:36.284662290Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Sep 8 23:42:36.285050 containerd[1521]: time="2025-09-08T23:42:36.285022595Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Sep 8 23:42:36.285134 containerd[1521]: time="2025-09-08T23:42:36.285097638Z" level=info msg="metadata content store policy set" policy=shared
Sep 8 23:42:36.289418 containerd[1521]: time="2025-09-08T23:42:36.289377137Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Sep 8 23:42:36.289473 containerd[1521]: time="2025-09-08T23:42:36.289433380Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Sep 8 23:42:36.289473 containerd[1521]: time="2025-09-08T23:42:36.289448491Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Sep 8 23:42:36.289473 containerd[1521]: time="2025-09-08T23:42:36.289462111Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Sep 8 23:42:36.289541 containerd[1521]: time="2025-09-08T23:42:36.289474238Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Sep 8 23:42:36.289541 containerd[1521]: time="2025-09-08T23:42:36.289484757Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Sep 8 23:42:36.289541 containerd[1521]: time="2025-09-08T23:42:36.289496061Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Sep 8 23:42:36.289541 containerd[1521]: time="2025-09-08T23:42:36.289507168Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Sep 8 23:42:36.289541 containerd[1521]: time="2025-09-08T23:42:36.289519061Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Sep 8 23:42:36.289541 containerd[1521]: time="2025-09-08T23:42:36.289529305Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Sep 8 23:42:36.289541 containerd[1521]: time="2025-09-08T23:42:36.289538410Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Sep 8 23:42:36.289648 containerd[1521]: time="2025-09-08T23:42:36.289550617Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Sep 8 23:42:36.289696 containerd[1521]: time="2025-09-08T23:42:36.289673858Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Sep 8 23:42:36.289719 containerd[1521]: time="2025-09-08T23:42:36.289701175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Sep 8 23:42:36.289719 containerd[1521]: time="2025-09-08T23:42:36.289715737Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Sep 8 23:42:36.289821 containerd[1521]: time="2025-09-08T23:42:36.289806912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Sep 8 23:42:36.289821 containerd[1521]: time="2025-09-08T23:42:36.289821041Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Sep 8 23:42:36.289868 containerd[1521]: time="2025-09-08T23:42:36.289831364Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Sep 8 23:42:36.289868 containerd[1521]: time="2025-09-08T23:42:36.289843374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Sep 8 23:42:36.289868 containerd[1521]: time="2025-09-08T23:42:36.289853186Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Sep 8 23:42:36.289923 containerd[1521]: time="2025-09-08T23:42:36.289867276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Sep 8 23:42:36.289923 containerd[1521]: time="2025-09-08T23:42:36.289878580Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Sep 8 23:42:36.289923 containerd[1521]: time="2025-09-08T23:42:36.289889805Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Sep 8 23:42:36.290084 containerd[1521]: time="2025-09-08T23:42:36.290062539Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Sep 8 23:42:36.290110 containerd[1521]: time="2025-09-08T23:42:36.290087031Z" level=info msg="Start snapshots syncer"
Sep 8 23:42:36.290128 containerd[1521]: time="2025-09-08T23:42:36.290113367Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Sep 8 23:42:36.290375 containerd[1521]: time="2025-09-08T23:42:36.290338420Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Sep 8 23:42:36.290481 containerd[1521]: time="2025-09-08T23:42:36.290389914Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Sep 8 23:42:36.290481 containerd[1521]: time="2025-09-08T23:42:36.290469354Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Sep 8 23:42:36.290593 containerd[1521]: time="2025-09-08T23:42:36.290571479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Sep 8 23:42:36.290618 containerd[1521]: time="2025-09-08T23:42:36.290599856Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Sep 8 23:42:36.290618 containerd[1521]: time="2025-09-08T23:42:36.290612730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Sep 8 23:42:36.290651 containerd[1521]: time="2025-09-08T23:42:36.290630274Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Sep 8 23:42:36.290651 containerd[1521]: time="2025-09-08T23:42:36.290642127Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Sep 8 23:42:36.290689 containerd[1521]: time="2025-09-08T23:42:36.290652764Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Sep 8 23:42:36.290689 containerd[1521]: time="2025-09-08T23:42:36.290663400Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Sep 8 23:42:36.290722 containerd[1521]: time="2025-09-08T23:42:36.290687263Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Sep 8 23:42:36.290722 containerd[1521]: time="2025-09-08T23:42:36.290698685Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Sep 8 23:42:36.290722 containerd[1521]: time="2025-09-08T23:42:36.290709478Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Sep 8 23:42:36.290782 containerd[1521]: time="2025-09-08T23:42:36.290757244Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 8 23:42:36.290782 containerd[1521]: time="2025-09-08T23:42:36.290773218Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Sep 8 23:42:36.290815 containerd[1521]: time="2025-09-08T23:42:36.290781853Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 8 23:42:36.290815 containerd[1521]: time="2025-09-08T23:42:36.290791116Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Sep 8 23:42:36.290815 containerd[1521]: time="2025-09-08T23:42:36.290799280Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Sep 8 23:42:36.290815 containerd[1521]: time="2025-09-08T23:42:36.290809445Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Sep 8 23:42:36.290881 containerd[1521]: time="2025-09-08T23:42:36.290820121Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Sep 8 23:42:36.290900 containerd[1521]: time="2025-09-08T23:42:36.290894380Z" level=info msg="runtime interface created"
Sep 8 23:42:36.290917 containerd[1521]: time="2025-09-08T23:42:36.290901130Z" level=info msg="created NRI interface"
Sep 8 23:42:36.290917 containerd[1521]: time="2025-09-08T23:42:36.290913062Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Sep 8 23:42:36.290948 containerd[1521]: time="2025-09-08T23:42:36.290924209Z" level=info msg="Connect containerd service"
Sep 8 23:42:36.290965 containerd[1521]: time="2025-09-08T23:42:36.290951173Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Sep 8 23:42:36.291681 containerd[1521]: time="2025-09-08T23:42:36.291650077Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 8 23:42:36.369892 containerd[1521]: time="2025-09-08T23:42:36.369717257Z" level=info msg="Start subscribing containerd event"
Sep 8 23:42:36.369892 containerd[1521]: time="2025-09-08T23:42:36.369846974Z" level=info msg="Start recovering state"
Sep 8 23:42:36.370034 containerd[1521]: time="2025-09-08T23:42:36.369984895Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Sep 8 23:42:36.370069 containerd[1521]: time="2025-09-08T23:42:36.370049499Z" level=info msg=serving... address=/run/containerd/containerd.sock
Sep 8 23:42:36.370092 containerd[1521]: time="2025-09-08T23:42:36.369991881Z" level=info msg="Start event monitor"
Sep 8 23:42:36.370110 containerd[1521]: time="2025-09-08T23:42:36.370093653Z" level=info msg="Start cni network conf syncer for default"
Sep 8 23:42:36.370110 containerd[1521]: time="2025-09-08T23:42:36.370101739Z" level=info msg="Start streaming server"
Sep 8 23:42:36.370166 containerd[1521]: time="2025-09-08T23:42:36.370109863Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Sep 8 23:42:36.370166 containerd[1521]: time="2025-09-08T23:42:36.370116732Z" level=info msg="runtime interface starting up..."
Sep 8 23:42:36.370271 containerd[1521]: time="2025-09-08T23:42:36.370122148Z" level=info msg="starting plugins..."
Sep 8 23:42:36.370307 containerd[1521]: time="2025-09-08T23:42:36.370279654Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Sep 8 23:42:36.370632 systemd[1]: Started containerd.service - containerd container runtime.
Sep 8 23:42:36.371315 containerd[1521]: time="2025-09-08T23:42:36.371291921Z" level=info msg="containerd successfully booted in 0.106221s"
Sep 8 23:42:36.487499 tar[1519]: linux-arm64/README.md
Sep 8 23:42:36.502249 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Sep 8 23:42:36.906263 sshd_keygen[1524]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Sep 8 23:42:36.926253 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Sep 8 23:42:36.930503 systemd[1]: Starting issuegen.service - Generate /run/issue...
Sep 8 23:42:36.949862 systemd[1]: issuegen.service: Deactivated successfully.
Sep 8 23:42:36.950116 systemd[1]: Finished issuegen.service - Generate /run/issue.
Sep 8 23:42:36.952759 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Sep 8 23:42:36.974679 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Sep 8 23:42:36.977393 systemd[1]: Started getty@tty1.service - Getty on tty1.
Sep 8 23:42:36.979430 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Sep 8 23:42:36.980483 systemd[1]: Reached target getty.target - Login Prompts.
Sep 8 23:42:37.983324 systemd-networkd[1434]: eth0: Gained IPv6LL
Sep 8 23:42:37.985789 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Sep 8 23:42:37.987351 systemd[1]: Reached target network-online.target - Network is Online.
Sep 8 23:42:37.989596 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Sep 8 23:42:37.991836 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:42:38.005422 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Sep 8 23:42:38.023135 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Sep 8 23:42:38.025170 systemd[1]: coreos-metadata.service: Deactivated successfully.
Sep 8 23:42:38.025435 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Sep 8 23:42:38.027625 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Sep 8 23:42:38.537505 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:42:38.538819 systemd[1]: Reached target multi-user.target - Multi-User System.
Sep 8 23:42:38.542826 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 8 23:42:38.545491 systemd[1]: Startup finished in 1.989s (kernel) + 5.741s (initrd) + 4.254s (userspace) = 11.985s.
Sep 8 23:42:38.884150 kubelet[1634]: E0908 23:42:38.884017 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 8 23:42:38.886475 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 8 23:42:38.886613 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 8 23:42:38.886916 systemd[1]: kubelet.service: Consumed 752ms CPU time, 258.6M memory peak.
Sep 8 23:42:41.091061 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Sep 8 23:42:41.092354 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:35716.service - OpenSSH per-connection server daemon (10.0.0.1:35716).
Sep 8 23:42:41.175513 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 35716 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:42:41.177500 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:42:41.187882 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Sep 8 23:42:41.189677 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 8 23:42:41.192738 systemd-logind[1491]: New session 1 of user core.
Sep 8 23:42:41.214232 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 8 23:42:41.217510 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 8 23:42:41.235669 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 8 23:42:41.238204 systemd-logind[1491]: New session c1 of user core.
Sep 8 23:42:41.365318 systemd[1651]: Queued start job for default target default.target.
Sep 8 23:42:41.380994 systemd[1651]: Created slice app.slice - User Application Slice.
Sep 8 23:42:41.381041 systemd[1651]: Reached target paths.target - Paths.
Sep 8 23:42:41.381083 systemd[1651]: Reached target timers.target - Timers.
Sep 8 23:42:41.385762 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 8 23:42:41.399481 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 8 23:42:41.399558 systemd[1651]: Reached target sockets.target - Sockets.
Sep 8 23:42:41.399604 systemd[1651]: Reached target basic.target - Basic System.
Sep 8 23:42:41.399632 systemd[1651]: Reached target default.target - Main User Target.
Sep 8 23:42:41.399659 systemd[1651]: Startup finished in 152ms.
Sep 8 23:42:41.399826 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 8 23:42:41.401290 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 8 23:42:41.463142 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:35720.service - OpenSSH per-connection server daemon (10.0.0.1:35720).
Sep 8 23:42:41.531050 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 35720 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:42:41.532470 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:42:41.537261 systemd-logind[1491]: New session 2 of user core.
Sep 8 23:42:41.548361 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 8 23:42:41.600228 sshd[1664]: Connection closed by 10.0.0.1 port 35720
Sep 8 23:42:41.600758 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
Sep 8 23:42:41.617687 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:35720.service: Deactivated successfully.
Sep 8 23:42:41.620413 systemd[1]: session-2.scope: Deactivated successfully.
Sep 8 23:42:41.621483 systemd-logind[1491]: Session 2 logged out. Waiting for processes to exit.
Sep 8 23:42:41.623759 systemd-logind[1491]: Removed session 2.
Sep 8 23:42:41.625740 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:35726.service - OpenSSH per-connection server daemon (10.0.0.1:35726).
Sep 8 23:42:41.678873 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 35726 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:42:41.680580 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:42:41.685222 systemd-logind[1491]: New session 3 of user core.
Sep 8 23:42:41.699349 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 8 23:42:41.748127 sshd[1672]: Connection closed by 10.0.0.1 port 35726
Sep 8 23:42:41.748588 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
Sep 8 23:42:41.758593 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:35726.service: Deactivated successfully.
Sep 8 23:42:41.761632 systemd[1]: session-3.scope: Deactivated successfully.
Sep 8 23:42:41.762349 systemd-logind[1491]: Session 3 logged out. Waiting for processes to exit.
Sep 8 23:42:41.764714 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:35742.service - OpenSSH per-connection server daemon (10.0.0.1:35742).
Sep 8 23:42:41.765707 systemd-logind[1491]: Removed session 3.
Sep 8 23:42:41.815827 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 35742 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:42:41.817441 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:42:41.821437 systemd-logind[1491]: New session 4 of user core.
Sep 8 23:42:41.828366 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 8 23:42:41.880274 sshd[1680]: Connection closed by 10.0.0.1 port 35742
Sep 8 23:42:41.880507 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
Sep 8 23:42:41.895651 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:35742.service: Deactivated successfully.
Sep 8 23:42:41.898692 systemd[1]: session-4.scope: Deactivated successfully.
Sep 8 23:42:41.899456 systemd-logind[1491]: Session 4 logged out. Waiting for processes to exit.
Sep 8 23:42:41.902423 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:35758.service - OpenSSH per-connection server daemon (10.0.0.1:35758).
Sep 8 23:42:41.903093 systemd-logind[1491]: Removed session 4.
Sep 8 23:42:41.956894 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 35758 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:42:41.958326 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:42:41.964104 systemd-logind[1491]: New session 5 of user core.
Sep 8 23:42:41.970383 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 8 23:42:42.029506 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 8 23:42:42.029785 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:42:42.044932 sudo[1689]: pam_unix(sudo:session): session closed for user root
Sep 8 23:42:42.046564 sshd[1688]: Connection closed by 10.0.0.1 port 35758
Sep 8 23:42:42.047150 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
Sep 8 23:42:42.061694 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:35758.service: Deactivated successfully.
Sep 8 23:42:42.064410 systemd[1]: session-5.scope: Deactivated successfully.
Sep 8 23:42:42.065452 systemd-logind[1491]: Session 5 logged out. Waiting for processes to exit.
Sep 8 23:42:42.070703 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:35774.service - OpenSSH per-connection server daemon (10.0.0.1:35774).
Sep 8 23:42:42.072021 systemd-logind[1491]: Removed session 5.
Sep 8 23:42:42.119677 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 35774 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:42:42.121131 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:42:42.125299 systemd-logind[1491]: New session 6 of user core.
Sep 8 23:42:42.133346 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 8 23:42:42.186116 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 8 23:42:42.186425 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:42:42.212224 sudo[1699]: pam_unix(sudo:session): session closed for user root
Sep 8 23:42:42.217424 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 8 23:42:42.217718 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:42:42.227466 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 8 23:42:42.275466 augenrules[1721]: No rules
Sep 8 23:42:42.277140 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 8 23:42:42.277387 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 8 23:42:42.278552 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 8 23:42:42.280640 sshd[1697]: Connection closed by 10.0.0.1 port 35774
Sep 8 23:42:42.280774 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
Sep 8 23:42:42.293785 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:35774.service: Deactivated successfully.
Sep 8 23:42:42.295458 systemd[1]: session-6.scope: Deactivated successfully.
Sep 8 23:42:42.296798 systemd-logind[1491]: Session 6 logged out. Waiting for processes to exit.
Sep 8 23:42:42.299410 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:35788.service - OpenSSH per-connection server daemon (10.0.0.1:35788).
Sep 8 23:42:42.300195 systemd-logind[1491]: Removed session 6.
Sep 8 23:42:42.354503 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 35788 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:42:42.355880 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:42:42.360231 systemd-logind[1491]: New session 7 of user core.
Sep 8 23:42:42.373354 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 8 23:42:42.424740 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 8 23:42:42.425719 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 8 23:42:42.732278 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 8 23:42:42.750532 (dockerd)[1753]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 8 23:42:42.971549 dockerd[1753]: time="2025-09-08T23:42:42.971479221Z" level=info msg="Starting up"
Sep 8 23:42:42.973366 dockerd[1753]: time="2025-09-08T23:42:42.973331557Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 8 23:42:43.001969 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport917237385-merged.mount: Deactivated successfully.
Sep 8 23:42:43.120668 dockerd[1753]: time="2025-09-08T23:42:43.120613480Z" level=info msg="Loading containers: start."
Sep 8 23:42:43.130185 kernel: Initializing XFRM netlink socket
Sep 8 23:42:43.331994 systemd-networkd[1434]: docker0: Link UP
Sep 8 23:42:43.337830 dockerd[1753]: time="2025-09-08T23:42:43.337773770Z" level=info msg="Loading containers: done."
Sep 8 23:42:43.352200 dockerd[1753]: time="2025-09-08T23:42:43.351884540Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 8 23:42:43.352200 dockerd[1753]: time="2025-09-08T23:42:43.351991584Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Sep 8 23:42:43.352200 dockerd[1753]: time="2025-09-08T23:42:43.352111772Z" level=info msg="Initializing buildkit"
Sep 8 23:42:43.374798 dockerd[1753]: time="2025-09-08T23:42:43.374753126Z" level=info msg="Completed buildkit initialization"
Sep 8 23:42:43.382611 dockerd[1753]: time="2025-09-08T23:42:43.382558936Z" level=info msg="Daemon has completed initialization"
Sep 8 23:42:43.382997 dockerd[1753]: time="2025-09-08T23:42:43.382732288Z" level=info msg="API listen on /run/docker.sock"
Sep 8 23:42:43.382843 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 8 23:42:43.933104 containerd[1521]: time="2025-09-08T23:42:43.933062062Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\""
Sep 8 23:42:43.999882 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2343627144-merged.mount: Deactivated successfully.
Sep 8 23:42:44.499454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2327667913.mount: Deactivated successfully.
Sep 8 23:42:45.571346 containerd[1521]: time="2025-09-08T23:42:45.571247274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:45.572245 containerd[1521]: time="2025-09-08T23:42:45.572178189Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615"
Sep 8 23:42:45.572684 containerd[1521]: time="2025-09-08T23:42:45.572648160Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:45.575852 containerd[1521]: time="2025-09-08T23:42:45.575795808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:45.576462 containerd[1521]: time="2025-09-08T23:42:45.576422887Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.643316616s"
Sep 8 23:42:45.576527 containerd[1521]: time="2025-09-08T23:42:45.576464014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\""
Sep 8 23:42:45.577858 containerd[1521]: time="2025-09-08T23:42:45.577826477Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\""
Sep 8 23:42:46.641430 containerd[1521]: time="2025-09-08T23:42:46.641363088Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:46.641935 containerd[1521]: time="2025-09-08T23:42:46.641901575Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979"
Sep 8 23:42:46.644078 containerd[1521]: time="2025-09-08T23:42:46.644008992Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:46.646019 containerd[1521]: time="2025-09-08T23:42:46.645957637Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:46.647197 containerd[1521]: time="2025-09-08T23:42:46.647149162Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.069268465s"
Sep 8 23:42:46.647197 containerd[1521]: time="2025-09-08T23:42:46.647197959Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\""
Sep 8 23:42:46.647860 containerd[1521]: time="2025-09-08T23:42:46.647609795Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\""
Sep 8 23:42:47.998311 containerd[1521]: time="2025-09-08T23:42:47.998230322Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:47.998879 containerd[1521]: time="2025-09-08T23:42:47.998837486Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016"
Sep 8 23:42:47.999625 containerd[1521]: time="2025-09-08T23:42:47.999588028Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:48.002214 containerd[1521]: time="2025-09-08T23:42:48.002146897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:48.003299 containerd[1521]: time="2025-09-08T23:42:48.003256125Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.355611967s"
Sep 8 23:42:48.003369 containerd[1521]: time="2025-09-08T23:42:48.003305059Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\""
Sep 8 23:42:48.004058 containerd[1521]: time="2025-09-08T23:42:48.003795477Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\""
Sep 8 23:42:49.084550 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1477326800.mount: Deactivated successfully.
Sep 8 23:42:49.086120 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 8 23:42:49.087915 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:42:49.301621 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:42:49.331678 (kubelet)[2042]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 8 23:42:49.374127 kubelet[2042]: E0908 23:42:49.374010 2042 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 8 23:42:49.378057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 8 23:42:49.378239 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 8 23:42:49.378974 systemd[1]: kubelet.service: Consumed 165ms CPU time, 108.1M memory peak.
Sep 8 23:42:49.622142 containerd[1521]: time="2025-09-08T23:42:49.622084109Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:49.622942 containerd[1521]: time="2025-09-08T23:42:49.622901913Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961"
Sep 8 23:42:49.623713 containerd[1521]: time="2025-09-08T23:42:49.623680328Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:49.625980 containerd[1521]: time="2025-09-08T23:42:49.625832263Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:49.626615 containerd[1521]: time="2025-09-08T23:42:49.626382954Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.622547742s"
Sep 8 23:42:49.626615 containerd[1521]: time="2025-09-08T23:42:49.626421865Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\""
Sep 8 23:42:49.626934 containerd[1521]: time="2025-09-08T23:42:49.626890908Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 8 23:42:50.159869 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1399227786.mount: Deactivated successfully.
Sep 8 23:42:50.969192 containerd[1521]: time="2025-09-08T23:42:50.969126516Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:50.969883 containerd[1521]: time="2025-09-08T23:42:50.969846226Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 8 23:42:50.970718 containerd[1521]: time="2025-09-08T23:42:50.970685867Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:50.973284 containerd[1521]: time="2025-09-08T23:42:50.973202279Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:50.974316 containerd[1521]: time="2025-09-08T23:42:50.974279829Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.347353833s"
Sep 8 23:42:50.974444 containerd[1521]: time="2025-09-08T23:42:50.974391146Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 8 23:42:50.975002 containerd[1521]: time="2025-09-08T23:42:50.974977443Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 8 23:42:51.434231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2248594206.mount: Deactivated successfully.
Sep 8 23:42:51.439313 containerd[1521]: time="2025-09-08T23:42:51.439262374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:42:51.439634 containerd[1521]: time="2025-09-08T23:42:51.439603348Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 8 23:42:51.440624 containerd[1521]: time="2025-09-08T23:42:51.440561274Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:42:51.442693 containerd[1521]: time="2025-09-08T23:42:51.442635565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 8 23:42:51.443240 containerd[1521]: time="2025-09-08T23:42:51.443205038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 468.192893ms"
Sep 8 23:42:51.443240 containerd[1521]: time="2025-09-08T23:42:51.443239351Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 8 23:42:51.444405 containerd[1521]: time="2025-09-08T23:42:51.444184071Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 8 23:42:51.888456 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1423026843.mount: Deactivated successfully.
Sep 8 23:42:53.520393 containerd[1521]: time="2025-09-08T23:42:53.520333537Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:53.521338 containerd[1521]: time="2025-09-08T23:42:53.521311716Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297"
Sep 8 23:42:53.522067 containerd[1521]: time="2025-09-08T23:42:53.522026047Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:53.528204 containerd[1521]: time="2025-09-08T23:42:53.528130981Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:42:53.529978 containerd[1521]: time="2025-09-08T23:42:53.529936032Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.08571872s"
Sep 8 23:42:53.530033 containerd[1521]: time="2025-09-08T23:42:53.529976433Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 8 23:42:58.806729 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:42:58.806901 systemd[1]: kubelet.service: Consumed 165ms CPU time, 108.1M memory peak.
Sep 8 23:42:58.809015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:42:58.834383 systemd[1]: Reload requested from client PID 2192 ('systemctl') (unit session-7.scope)...
Sep 8 23:42:58.834399 systemd[1]: Reloading...
Sep 8 23:42:58.911189 zram_generator::config[2235]: No configuration found.
Sep 8 23:42:59.126001 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 8 23:42:59.228738 systemd[1]: Reloading finished in 394 ms.
Sep 8 23:42:59.288493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:42:59.290330 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:42:59.292713 systemd[1]: kubelet.service: Deactivated successfully.
Sep 8 23:42:59.293061 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:42:59.294223 systemd[1]: kubelet.service: Consumed 95ms CPU time, 95.1M memory peak.
Sep 8 23:42:59.295828 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:42:59.456623 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:42:59.460312 (kubelet)[2283]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 8 23:42:59.491859 kubelet[2283]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:42:59.491859 kubelet[2283]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 8 23:42:59.491859 kubelet[2283]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:42:59.492251 kubelet[2283]: I0908 23:42:59.491919 2283 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:43:00.894837 kubelet[2283]: I0908 23:43:00.894797 2283 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 8 23:43:00.894837 kubelet[2283]: I0908 23:43:00.894827 2283 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:43:00.895207 kubelet[2283]: I0908 23:43:00.895040 2283 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 8 23:43:00.916375 kubelet[2283]: E0908 23:43:00.916319 2283 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 8 23:43:00.916744 kubelet[2283]: I0908 23:43:00.916714 2283 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:43:00.923264 kubelet[2283]: I0908 23:43:00.923242 2283 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 8 23:43:00.925916 kubelet[2283]: I0908 23:43:00.925896 2283 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:43:00.927054 kubelet[2283]: I0908 23:43:00.926992 2283 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:43:00.927196 kubelet[2283]: I0908 23:43:00.927040 2283 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:43:00.927328 kubelet[2283]: I0908 23:43:00.927262 2283 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:43:00.927328 kubelet[2283]: I0908 23:43:00.927271 2283 container_manager_linux.go:303] "Creating device plugin manager"
Sep 8 23:43:00.927463 kubelet[2283]: I0908 23:43:00.927446 2283 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:43:00.929874 kubelet[2283]: I0908 23:43:00.929839 2283 kubelet.go:480] "Attempting to sync node with API server"
Sep 8 23:43:00.929874 kubelet[2283]: I0908 23:43:00.929865 2283 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:43:00.930010 kubelet[2283]: I0908 23:43:00.929972 2283 kubelet.go:386] "Adding apiserver pod source"
Sep 8 23:43:00.932763 kubelet[2283]: I0908 23:43:00.932726 2283 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:43:00.933866 kubelet[2283]: I0908 23:43:00.933559 2283 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 8 23:43:00.934262 kubelet[2283]: I0908 23:43:00.934231 2283 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 8 23:43:00.934359 kubelet[2283]: W0908 23:43:00.934344 2283 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 8 23:43:00.938209 kubelet[2283]: I0908 23:43:00.938180 2283 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:43:00.938273 kubelet[2283]: I0908 23:43:00.938244 2283 server.go:1289] "Started kubelet"
Sep 8 23:43:00.938514 kubelet[2283]: E0908 23:43:00.938402 2283 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Sep 8 23:43:00.938777 kubelet[2283]: E0908 23:43:00.938750 2283 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Sep 8 23:43:00.938945 kubelet[2283]: I0908 23:43:00.938907 2283 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:43:00.941101 kubelet[2283]: I0908 23:43:00.940575 2283 server.go:317] "Adding debug handlers to kubelet server"
Sep 8 23:43:00.941955 kubelet[2283]: I0908 23:43:00.941899 2283 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:43:00.943690 kubelet[2283]: I0908 23:43:00.942475 2283 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:43:00.944543 kubelet[2283]: I0908 23:43:00.944520 2283 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Sep 8 23:43:00.945519 kubelet[2283]: E0908 23:43:00.944212 2283 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863733bf3faed05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-08 23:43:00.938198277 +0000 UTC m=+1.474674880,LastTimestamp:2025-09-08 23:43:00.938198277 +0000 UTC m=+1.474674880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Sep 8 23:43:00.945904 kubelet[2283]: I0908 23:43:00.945883 2283 volume_manager.go:297] "Starting Kubelet Volume Manager"
Sep 8 23:43:00.945988 kubelet[2283]: I0908 23:43:00.945957 2283 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Sep 8 23:43:00.946631 kubelet[2283]: E0908 23:43:00.946614 2283 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Sep 8 23:43:00.946926 kubelet[2283]: E0908 23:43:00.946900 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms"
Sep 8 23:43:00.947245 kubelet[2283]: E0908 23:43:00.946941 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:43:00.947466 kubelet[2283]: I0908 23:43:00.947039 2283 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Sep 8 23:43:00.947548 kubelet[2283]: I0908 23:43:00.947074 2283 reconciler.go:26] "Reconciler: start to sync state"
Sep 8 23:43:00.947663 kubelet[2283]: E0908 23:43:00.947366 2283 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 8 23:43:00.947751 kubelet[2283]: I0908 23:43:00.947714 2283 factory.go:223] Registration of the systemd container factory successfully
Sep 8 23:43:00.947825 kubelet[2283]: I0908 23:43:00.947805 2283 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Sep 8 23:43:00.948819 kubelet[2283]: I0908 23:43:00.948796 2283 factory.go:223] Registration of the containerd container factory successfully
Sep 8 23:43:00.962433 kubelet[2283]: I0908 23:43:00.962390 2283 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Sep 8 23:43:00.963560 kubelet[2283]: I0908 23:43:00.963523 2283 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Sep 8 23:43:00.963560 kubelet[2283]: I0908 23:43:00.963552 2283 status_manager.go:230] "Starting to sync pod status with apiserver"
Sep 8 23:43:00.963647 kubelet[2283]: I0908 23:43:00.963572 2283 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Sep 8 23:43:00.963647 kubelet[2283]: I0908 23:43:00.963579 2283 kubelet.go:2436] "Starting kubelet main sync loop"
Sep 8 23:43:00.963647 kubelet[2283]: E0908 23:43:00.963622 2283 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Sep 8 23:43:00.966798 kubelet[2283]: E0908 23:43:00.966737 2283 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Sep 8 23:43:00.968484 kubelet[2283]: I0908 23:43:00.968442 2283 cpu_manager.go:221] "Starting CPU manager" policy="none"
Sep 8 23:43:00.968484 kubelet[2283]: I0908 23:43:00.968462 2283 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Sep 8 23:43:00.968484 kubelet[2283]: I0908 23:43:00.968480 2283 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:43:01.048026 kubelet[2283]: E0908 23:43:01.047941 2283 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Sep 8 23:43:01.064096 kubelet[2283]: E0908 23:43:01.064038 2283 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Sep 8 23:43:01.065007 kubelet[2283]: I0908 23:43:01.064972 2283 policy_none.go:49] "None policy: Start"
Sep 8 23:43:01.065007 kubelet[2283]: I0908 23:43:01.064998 2283 memory_manager.go:186] "Starting memorymanager" policy="None"
Sep 8 23:43:01.065061 kubelet[2283]: I0908 23:43:01.065011 2283 state_mem.go:35] "Initializing new in-memory state store"
Sep 8 23:43:01.070727 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Sep 8 23:43:01.086526 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Sep 8 23:43:01.091475 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Sep 8 23:43:01.113335 kubelet[2283]: E0908 23:43:01.113241 2283 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Sep 8 23:43:01.113502 kubelet[2283]: I0908 23:43:01.113462 2283 eviction_manager.go:189] "Eviction manager: starting control loop"
Sep 8 23:43:01.113502 kubelet[2283]: I0908 23:43:01.113475 2283 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Sep 8 23:43:01.114301 kubelet[2283]: I0908 23:43:01.114004 2283 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Sep 8 23:43:01.114857 kubelet[2283]: E0908 23:43:01.114791 2283 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Sep 8 23:43:01.114857 kubelet[2283]: E0908 23:43:01.114830 2283 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Sep 8 23:43:01.148552 kubelet[2283]: E0908 23:43:01.148435 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms"
Sep 8 23:43:01.215631 kubelet[2283]: I0908 23:43:01.215593 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:43:01.216190 kubelet[2283]: E0908 23:43:01.216140 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Sep 8 23:43:01.275215 systemd[1]: Created slice kubepods-burstable-pod6d7dbb965a41f2722e375537b647a8b0.slice - libcontainer container kubepods-burstable-pod6d7dbb965a41f2722e375537b647a8b0.slice.
Sep 8 23:43:01.287895 kubelet[2283]: E0908 23:43:01.287853 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:01.290961 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice.
Sep 8 23:43:01.292972 kubelet[2283]: E0908 23:43:01.292951 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:01.295550 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice.
Sep 8 23:43:01.297486 kubelet[2283]: E0908 23:43:01.297457 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:01.348791 kubelet[2283]: I0908 23:43:01.348740 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d7dbb965a41f2722e375537b647a8b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d7dbb965a41f2722e375537b647a8b0\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:43:01.348863 kubelet[2283]: I0908 23:43:01.348808 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:43:01.348863 kubelet[2283]: I0908 23:43:01.348841 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:43:01.348863 kubelet[2283]: I0908 23:43:01.348863 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost"
Sep 8 23:43:01.348946 kubelet[2283]: I0908 23:43:01.348877 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d7dbb965a41f2722e375537b647a8b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d7dbb965a41f2722e375537b647a8b0\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:43:01.348946 kubelet[2283]: I0908 23:43:01.348893 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d7dbb965a41f2722e375537b647a8b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d7dbb965a41f2722e375537b647a8b0\") " pod="kube-system/kube-apiserver-localhost"
Sep 8 23:43:01.348946 kubelet[2283]: I0908 23:43:01.348906 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:43:01.348946 kubelet[2283]: I0908 23:43:01.348920 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:43:01.348946 kubelet[2283]: I0908 23:43:01.348935 2283 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:43:01.418197 kubelet[2283]: I0908 23:43:01.418088 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:43:01.418440 kubelet[2283]: E0908 23:43:01.418415 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Sep 8 23:43:01.549958 kubelet[2283]: E0908 23:43:01.549900 2283 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms"
Sep 8 23:43:01.589315 kubelet[2283]: E0908 23:43:01.589281 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.589929 containerd[1521]: time="2025-09-08T23:43:01.589895023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d7dbb965a41f2722e375537b647a8b0,Namespace:kube-system,Attempt:0,}"
Sep 8 23:43:01.594271 kubelet[2283]: E0908 23:43:01.594200 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.594699 containerd[1521]: time="2025-09-08T23:43:01.594672556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}"
Sep 8 23:43:01.598057 kubelet[2283]: E0908 23:43:01.598037 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.598608 containerd[1521]: time="2025-09-08T23:43:01.598584987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}"
Sep 8 23:43:01.616971 containerd[1521]: time="2025-09-08T23:43:01.616924433Z" level=info msg="connecting to shim 54b7400f62876deed8a322bc945363af0fec21b7ae076d69498e5184874a6fea" address="unix:///run/containerd/s/72c9dec91c74cf635ba270b6d238f6e2a148c523380f905f18b89ec4afc06694" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:43:01.647111 containerd[1521]: time="2025-09-08T23:43:01.647060651Z" level=info msg="connecting to shim 2286c338247e0841d55a23d6cc24e755ec8255d7d6a424e8cbd3ed2ce1cfa655" address="unix:///run/containerd/s/3127e9ea6160604c3b242299651052a727cc2eeaf9caee470336743485933000" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:43:01.648383 systemd[1]: Started cri-containerd-54b7400f62876deed8a322bc945363af0fec21b7ae076d69498e5184874a6fea.scope - libcontainer container 54b7400f62876deed8a322bc945363af0fec21b7ae076d69498e5184874a6fea.
Sep 8 23:43:01.649651 containerd[1521]: time="2025-09-08T23:43:01.649592642Z" level=info msg="connecting to shim ce19875d622b5b18a8f2662adbb8d4540a39161690875b63e08173e5726343b3" address="unix:///run/containerd/s/f402ce3b3088ff1c12c0eb5c34b3d8ebefe9cf6cbbbb3fb4843526571eec9055" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:43:01.678316 systemd[1]: Started cri-containerd-2286c338247e0841d55a23d6cc24e755ec8255d7d6a424e8cbd3ed2ce1cfa655.scope - libcontainer container 2286c338247e0841d55a23d6cc24e755ec8255d7d6a424e8cbd3ed2ce1cfa655.
Sep 8 23:43:01.683234 systemd[1]: Started cri-containerd-ce19875d622b5b18a8f2662adbb8d4540a39161690875b63e08173e5726343b3.scope - libcontainer container ce19875d622b5b18a8f2662adbb8d4540a39161690875b63e08173e5726343b3.
Sep 8 23:43:01.722526 containerd[1521]: time="2025-09-08T23:43:01.722465952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:6d7dbb965a41f2722e375537b647a8b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"54b7400f62876deed8a322bc945363af0fec21b7ae076d69498e5184874a6fea\""
Sep 8 23:43:01.724861 kubelet[2283]: E0908 23:43:01.724786 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.727113 containerd[1521]: time="2025-09-08T23:43:01.727083072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"ce19875d622b5b18a8f2662adbb8d4540a39161690875b63e08173e5726343b3\""
Sep 8 23:43:01.727726 kubelet[2283]: E0908 23:43:01.727704 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.730040 containerd[1521]: time="2025-09-08T23:43:01.729984856Z" level=info msg="CreateContainer within sandbox \"54b7400f62876deed8a322bc945363af0fec21b7ae076d69498e5184874a6fea\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Sep 8 23:43:01.730443 containerd[1521]: time="2025-09-08T23:43:01.730419007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"2286c338247e0841d55a23d6cc24e755ec8255d7d6a424e8cbd3ed2ce1cfa655\""
Sep 8 23:43:01.730982 kubelet[2283]: E0908 23:43:01.730923 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.731562 containerd[1521]: time="2025-09-08T23:43:01.731523390Z" level=info msg="CreateContainer within sandbox \"ce19875d622b5b18a8f2662adbb8d4540a39161690875b63e08173e5726343b3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Sep 8 23:43:01.734426 containerd[1521]: time="2025-09-08T23:43:01.734391277Z" level=info msg="CreateContainer within sandbox \"2286c338247e0841d55a23d6cc24e755ec8255d7d6a424e8cbd3ed2ce1cfa655\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Sep 8 23:43:01.739697 containerd[1521]: time="2025-09-08T23:43:01.739662441Z" level=info msg="Container 74425b7c84d0dfb503251a5a4da2f7e43878d95955910cda9e85f99f3d2d15b4: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:01.741748 containerd[1521]: time="2025-09-08T23:43:01.741715911Z" level=info msg="Container f6a60417b43a89dca842c16e2022b57c91ecc06b8c5d28ceb303b997e6a5638a: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:01.745023 containerd[1521]: time="2025-09-08T23:43:01.744131220Z" level=info msg="Container cc7f0b57dbcd8909eccc50f52c5d005ba5c51f0a42e92edf813ca908b4484b9c: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:01.747817 containerd[1521]: time="2025-09-08T23:43:01.747781345Z" level=info msg="CreateContainer within sandbox \"54b7400f62876deed8a322bc945363af0fec21b7ae076d69498e5184874a6fea\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"74425b7c84d0dfb503251a5a4da2f7e43878d95955910cda9e85f99f3d2d15b4\""
Sep 8 23:43:01.748704 containerd[1521]: time="2025-09-08T23:43:01.748674669Z" level=info msg="StartContainer for \"74425b7c84d0dfb503251a5a4da2f7e43878d95955910cda9e85f99f3d2d15b4\""
Sep 8 23:43:01.749875 containerd[1521]: time="2025-09-08T23:43:01.749833576Z" level=info msg="connecting to shim 74425b7c84d0dfb503251a5a4da2f7e43878d95955910cda9e85f99f3d2d15b4" address="unix:///run/containerd/s/72c9dec91c74cf635ba270b6d238f6e2a148c523380f905f18b89ec4afc06694" protocol=ttrpc version=3
Sep 8 23:43:01.753109 containerd[1521]: time="2025-09-08T23:43:01.753063982Z" level=info msg="CreateContainer within sandbox \"ce19875d622b5b18a8f2662adbb8d4540a39161690875b63e08173e5726343b3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f6a60417b43a89dca842c16e2022b57c91ecc06b8c5d28ceb303b997e6a5638a\""
Sep 8 23:43:01.753997 containerd[1521]: time="2025-09-08T23:43:01.753911816Z" level=info msg="CreateContainer within sandbox \"2286c338247e0841d55a23d6cc24e755ec8255d7d6a424e8cbd3ed2ce1cfa655\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"cc7f0b57dbcd8909eccc50f52c5d005ba5c51f0a42e92edf813ca908b4484b9c\""
Sep 8 23:43:01.754385 containerd[1521]: time="2025-09-08T23:43:01.754359437Z" level=info msg="StartContainer for \"f6a60417b43a89dca842c16e2022b57c91ecc06b8c5d28ceb303b997e6a5638a\""
Sep 8 23:43:01.754569 containerd[1521]: time="2025-09-08T23:43:01.754518451Z" level=info msg="StartContainer for \"cc7f0b57dbcd8909eccc50f52c5d005ba5c51f0a42e92edf813ca908b4484b9c\""
Sep 8 23:43:01.755375 containerd[1521]: time="2025-09-08T23:43:01.755346739Z" level=info msg="connecting to shim f6a60417b43a89dca842c16e2022b57c91ecc06b8c5d28ceb303b997e6a5638a" address="unix:///run/containerd/s/f402ce3b3088ff1c12c0eb5c34b3d8ebefe9cf6cbbbb3fb4843526571eec9055" protocol=ttrpc version=3
Sep 8 23:43:01.756177 containerd[1521]: time="2025-09-08T23:43:01.755902048Z" level=info msg="connecting to shim cc7f0b57dbcd8909eccc50f52c5d005ba5c51f0a42e92edf813ca908b4484b9c" address="unix:///run/containerd/s/3127e9ea6160604c3b242299651052a727cc2eeaf9caee470336743485933000" protocol=ttrpc version=3
Sep 8 23:43:01.781399 systemd[1]: Started cri-containerd-74425b7c84d0dfb503251a5a4da2f7e43878d95955910cda9e85f99f3d2d15b4.scope - libcontainer container 74425b7c84d0dfb503251a5a4da2f7e43878d95955910cda9e85f99f3d2d15b4.
Sep 8 23:43:01.782397 systemd[1]: Started cri-containerd-cc7f0b57dbcd8909eccc50f52c5d005ba5c51f0a42e92edf813ca908b4484b9c.scope - libcontainer container cc7f0b57dbcd8909eccc50f52c5d005ba5c51f0a42e92edf813ca908b4484b9c.
Sep 8 23:43:01.783273 systemd[1]: Started cri-containerd-f6a60417b43a89dca842c16e2022b57c91ecc06b8c5d28ceb303b997e6a5638a.scope - libcontainer container f6a60417b43a89dca842c16e2022b57c91ecc06b8c5d28ceb303b997e6a5638a.
Sep 8 23:43:01.822635 kubelet[2283]: I0908 23:43:01.822601 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:43:01.822937 kubelet[2283]: E0908 23:43:01.822916 2283 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost"
Sep 8 23:43:01.830438 containerd[1521]: time="2025-09-08T23:43:01.830341913Z" level=info msg="StartContainer for \"cc7f0b57dbcd8909eccc50f52c5d005ba5c51f0a42e92edf813ca908b4484b9c\" returns successfully"
Sep 8 23:43:01.832735 containerd[1521]: time="2025-09-08T23:43:01.832568428Z" level=info msg="StartContainer for \"74425b7c84d0dfb503251a5a4da2f7e43878d95955910cda9e85f99f3d2d15b4\" returns successfully"
Sep 8 23:43:01.836518 kubelet[2283]: E0908 23:43:01.835908 2283 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Sep 8 23:43:01.836834 containerd[1521]: time="2025-09-08T23:43:01.836756475Z" level=info msg="StartContainer for \"f6a60417b43a89dca842c16e2022b57c91ecc06b8c5d28ceb303b997e6a5638a\" returns successfully"
Sep 8 23:43:01.977634 kubelet[2283]: E0908 23:43:01.977543 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:01.977918 kubelet[2283]: E0908 23:43:01.977685 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.979763 kubelet[2283]: E0908 23:43:01.979735 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:01.979856 kubelet[2283]: E0908 23:43:01.979837 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:01.982983 kubelet[2283]: E0908 23:43:01.982964 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:01.983077 kubelet[2283]: E0908 23:43:01.983061 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:02.626351 kubelet[2283]: I0908 23:43:02.626318 2283 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Sep 8 23:43:02.986551 kubelet[2283]: E0908 23:43:02.986461 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:02.986855 kubelet[2283]: E0908 23:43:02.986594 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:02.987133 kubelet[2283]: E0908 23:43:02.987108 2283 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Sep 8 23:43:02.987244 kubelet[2283]: E0908 23:43:02.987227 2283 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:03.762328 kubelet[2283]: E0908 23:43:03.762277 2283 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Sep 8 23:43:03.827545 kubelet[2283]: I0908 23:43:03.827504 2283 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Sep 8 23:43:03.848457 kubelet[2283]: I0908 23:43:03.847884 2283 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:43:03.903338 kubelet[2283]: E0908 23:43:03.903267 2283 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Sep 8 23:43:03.903474 kubelet[2283]: I0908 23:43:03.903461 2283 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:43:03.905988 kubelet[2283]: E0908 23:43:03.905966 2283 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Sep 8 23:43:03.906258 kubelet[2283]: I0908 23:43:03.906072 2283 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:43:03.908274 kubelet[2283]: E0908 23:43:03.908192 2283 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Sep 8 23:43:03.934536 kubelet[2283]: I0908 23:43:03.934516 2283 apiserver.go:52] "Watching apiserver"
Sep 8 23:43:03.948062 kubelet[2283]: I0908 23:43:03.948042 2283 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Sep 8 23:43:05.904668 systemd[1]: Reload requested from client PID 2565 ('systemctl') (unit session-7.scope)...
Sep 8 23:43:05.904685 systemd[1]: Reloading...
Sep 8 23:43:05.972186 zram_generator::config[2608]: No configuration found.
Sep 8 23:43:06.044388 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Sep 8 23:43:06.167423 systemd[1]: Reloading finished in 262 ms.
Sep 8 23:43:06.190708 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:43:06.207556 systemd[1]: kubelet.service: Deactivated successfully.
Sep 8 23:43:06.207922 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:43:06.208025 systemd[1]: kubelet.service: Consumed 1.849s CPU time, 128.1M memory peak.
Sep 8 23:43:06.210354 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 8 23:43:06.335925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 8 23:43:06.340703 (kubelet)[2650]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 8 23:43:06.385202 kubelet[2650]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:43:06.385202 kubelet[2650]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 8 23:43:06.385202 kubelet[2650]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 8 23:43:06.385563 kubelet[2650]: I0908 23:43:06.385265 2650 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 8 23:43:06.390547 kubelet[2650]: I0908 23:43:06.390509 2650 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 8 23:43:06.390547 kubelet[2650]: I0908 23:43:06.390537 2650 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 8 23:43:06.390738 kubelet[2650]: I0908 23:43:06.390721 2650 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 8 23:43:06.391992 kubelet[2650]: I0908 23:43:06.391968 2650 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Sep 8 23:43:06.396791 kubelet[2650]: I0908 23:43:06.396694 2650 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 8 23:43:06.400409 kubelet[2650]: I0908 23:43:06.400384 2650 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 8 23:43:06.403771 kubelet[2650]: I0908 23:43:06.403750 2650 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 8 23:43:06.404102 kubelet[2650]: I0908 23:43:06.404073 2650 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 8 23:43:06.404359 kubelet[2650]: I0908 23:43:06.404200 2650 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 8 23:43:06.404499 kubelet[2650]: I0908 23:43:06.404485 2650 topology_manager.go:138] "Creating topology manager with none policy"
Sep 8 23:43:06.404559 kubelet[2650]: I0908 23:43:06.404549 2650 container_manager_linux.go:303] "Creating device plugin manager"
Sep 8 23:43:06.404649 kubelet[2650]: I0908 23:43:06.404639 2650 state_mem.go:36] "Initialized new in-memory state store"
Sep 8 23:43:06.404891 kubelet[2650]: I0908 23:43:06.404873 2650 kubelet.go:480] "Attempting to sync node with API server"
Sep 8 23:43:06.404992 kubelet[2650]: I0908 23:43:06.404977 2650 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 8 23:43:06.405087 kubelet[2650]: I0908 23:43:06.405077 2650 kubelet.go:386] "Adding apiserver pod source"
Sep 8 23:43:06.405270 kubelet[2650]: I0908 23:43:06.405257 2650 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 8 23:43:06.409197 kubelet[2650]: I0908 23:43:06.407968 2650 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Sep 8 23:43:06.410218 kubelet[2650]: I0908 23:43:06.409762 2650 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 8 23:43:06.416404 kubelet[2650]: I0908 23:43:06.416372 2650 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Sep 8 23:43:06.416473 kubelet[2650]: I0908 23:43:06.416421 2650 server.go:1289] "Started kubelet"
Sep 8 23:43:06.416589 kubelet[2650]: I0908 23:43:06.416537 2650 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Sep 8 23:43:06.416807 kubelet[2650]: I0908 23:43:06.416744 2650 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Sep 8 23:43:06.417051 kubelet[2650]: I0908 23:43:06.417021 2650 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Sep 8 23:43:06.419208 kubelet[2650]: I0908 23:43:06.417631 2650 server.go:317] "Adding debug handlers to kubelet server"
Sep 8 23:43:06.420456 kubelet[2650]: I0908
23:43:06.420435 2650 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 8 23:43:06.424956 kubelet[2650]: I0908 23:43:06.424922 2650 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 8 23:43:06.425477 kubelet[2650]: I0908 23:43:06.425432 2650 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 8 23:43:06.427321 kubelet[2650]: I0908 23:43:06.427293 2650 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 8 23:43:06.428169 kubelet[2650]: I0908 23:43:06.428136 2650 factory.go:223] Registration of the systemd container factory successfully Sep 8 23:43:06.428282 kubelet[2650]: I0908 23:43:06.428260 2650 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 8 23:43:06.428867 kubelet[2650]: I0908 23:43:06.428833 2650 reconciler.go:26] "Reconciler: start to sync state" Sep 8 23:43:06.430062 kubelet[2650]: I0908 23:43:06.429728 2650 factory.go:223] Registration of the containerd container factory successfully Sep 8 23:43:06.437780 kubelet[2650]: I0908 23:43:06.437754 2650 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 8 23:43:06.439510 kubelet[2650]: I0908 23:43:06.439200 2650 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 8 23:43:06.439510 kubelet[2650]: I0908 23:43:06.439222 2650 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 8 23:43:06.439510 kubelet[2650]: I0908 23:43:06.439246 2650 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 8 23:43:06.439510 kubelet[2650]: I0908 23:43:06.439255 2650 kubelet.go:2436] "Starting kubelet main sync loop" Sep 8 23:43:06.439510 kubelet[2650]: E0908 23:43:06.439290 2650 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 8 23:43:06.466811 kubelet[2650]: I0908 23:43:06.466781 2650 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 8 23:43:06.466811 kubelet[2650]: I0908 23:43:06.466800 2650 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 8 23:43:06.466811 kubelet[2650]: I0908 23:43:06.466820 2650 state_mem.go:36] "Initialized new in-memory state store" Sep 8 23:43:06.466961 kubelet[2650]: I0908 23:43:06.466938 2650 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 8 23:43:06.466997 kubelet[2650]: I0908 23:43:06.466958 2650 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 8 23:43:06.466997 kubelet[2650]: I0908 23:43:06.466974 2650 policy_none.go:49] "None policy: Start" Sep 8 23:43:06.466997 kubelet[2650]: I0908 23:43:06.466983 2650 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 8 23:43:06.466997 kubelet[2650]: I0908 23:43:06.466991 2650 state_mem.go:35] "Initializing new in-memory state store" Sep 8 23:43:06.467085 kubelet[2650]: I0908 23:43:06.467070 2650 state_mem.go:75] "Updated machine memory state" Sep 8 23:43:06.472058 kubelet[2650]: E0908 23:43:06.472024 2650 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 8 23:43:06.472262 kubelet[2650]: I0908 23:43:06.472239 2650 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 8 23:43:06.472312 kubelet[2650]: I0908 23:43:06.472256 2650 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 8 23:43:06.472966 kubelet[2650]: I0908 23:43:06.472678 2650 plugin_manager.go:118] "Starting 
Kubelet Plugin Manager" Sep 8 23:43:06.475277 kubelet[2650]: E0908 23:43:06.475252 2650 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 8 23:43:06.540327 kubelet[2650]: I0908 23:43:06.540293 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:43:06.540327 kubelet[2650]: I0908 23:43:06.540328 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:43:06.540670 kubelet[2650]: I0908 23:43:06.540563 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 8 23:43:06.579395 kubelet[2650]: I0908 23:43:06.579347 2650 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 8 23:43:06.585827 kubelet[2650]: I0908 23:43:06.585759 2650 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 8 23:43:06.585968 kubelet[2650]: I0908 23:43:06.585955 2650 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 8 23:43:06.630535 kubelet[2650]: I0908 23:43:06.630500 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 8 23:43:06.630770 kubelet[2650]: I0908 23:43:06.630662 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d7dbb965a41f2722e375537b647a8b0-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d7dbb965a41f2722e375537b647a8b0\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:43:06.630770 kubelet[2650]: I0908 23:43:06.630689 2650 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:43:06.630770 kubelet[2650]: I0908 23:43:06.630712 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:43:06.630770 kubelet[2650]: I0908 23:43:06.630731 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d7dbb965a41f2722e375537b647a8b0-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"6d7dbb965a41f2722e375537b647a8b0\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:43:06.630770 kubelet[2650]: I0908 23:43:06.630747 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d7dbb965a41f2722e375537b647a8b0-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"6d7dbb965a41f2722e375537b647a8b0\") " pod="kube-system/kube-apiserver-localhost" Sep 8 23:43:06.630891 kubelet[2650]: I0908 23:43:06.630801 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:43:06.630891 kubelet[2650]: 
I0908 23:43:06.630859 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:43:06.630891 kubelet[2650]: I0908 23:43:06.630879 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 8 23:43:06.845893 kubelet[2650]: E0908 23:43:06.845270 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:06.845893 kubelet[2650]: E0908 23:43:06.845293 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:06.846507 kubelet[2650]: E0908 23:43:06.846452 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:06.909666 sudo[2693]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 8 23:43:06.910104 sudo[2693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 8 23:43:07.354060 sudo[2693]: pam_unix(sudo:session): session closed for user root Sep 8 23:43:07.406105 kubelet[2650]: I0908 23:43:07.406071 2650 apiserver.go:52] "Watching apiserver" Sep 8 23:43:07.427719 kubelet[2650]: I0908 23:43:07.427670 2650 
desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 8 23:43:07.456192 kubelet[2650]: I0908 23:43:07.454303 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 8 23:43:07.456192 kubelet[2650]: I0908 23:43:07.454313 2650 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 8 23:43:07.456192 kubelet[2650]: E0908 23:43:07.454788 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:07.488744 kubelet[2650]: E0908 23:43:07.488284 2650 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 8 23:43:07.489113 kubelet[2650]: E0908 23:43:07.489082 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:07.513897 kubelet[2650]: E0908 23:43:07.513716 2650 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 8 23:43:07.514476 kubelet[2650]: I0908 23:43:07.513970 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.513954845 podStartE2EDuration="1.513954845s" podCreationTimestamp="2025-09-08 23:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:43:07.508350602 +0000 UTC m=+1.164413634" watchObservedRunningTime="2025-09-08 23:43:07.513954845 +0000 UTC m=+1.170017877" Sep 8 23:43:07.514555 kubelet[2650]: E0908 23:43:07.514356 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:07.531344 kubelet[2650]: I0908 23:43:07.531250 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.5312345939999998 podStartE2EDuration="1.531234594s" podCreationTimestamp="2025-09-08 23:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:43:07.530965194 +0000 UTC m=+1.187028226" watchObservedRunningTime="2025-09-08 23:43:07.531234594 +0000 UTC m=+1.187297626" Sep 8 23:43:07.531344 kubelet[2650]: I0908 23:43:07.531334 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.531327206 podStartE2EDuration="1.531327206s" podCreationTimestamp="2025-09-08 23:43:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:43:07.522727659 +0000 UTC m=+1.178790731" watchObservedRunningTime="2025-09-08 23:43:07.531327206 +0000 UTC m=+1.187390238" Sep 8 23:43:08.455985 kubelet[2650]: E0908 23:43:08.455901 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:08.455985 kubelet[2650]: E0908 23:43:08.455951 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:08.655828 sudo[1733]: pam_unix(sudo:session): session closed for user root Sep 8 23:43:08.656892 sshd[1732]: Connection closed by 10.0.0.1 port 35788 Sep 8 23:43:08.658397 sshd-session[1730]: pam_unix(sshd:session): session closed for user core Sep 8 23:43:08.661658 
systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:35788.service: Deactivated successfully. Sep 8 23:43:08.663470 systemd[1]: session-7.scope: Deactivated successfully. Sep 8 23:43:08.663719 systemd[1]: session-7.scope: Consumed 7.103s CPU time, 256.6M memory peak. Sep 8 23:43:08.664652 systemd-logind[1491]: Session 7 logged out. Waiting for processes to exit. Sep 8 23:43:08.665846 systemd-logind[1491]: Removed session 7. Sep 8 23:43:11.142743 kubelet[2650]: E0908 23:43:11.142713 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:11.215420 kubelet[2650]: I0908 23:43:11.215381 2650 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 8 23:43:11.215724 containerd[1521]: time="2025-09-08T23:43:11.215693884Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 8 23:43:11.216425 kubelet[2650]: I0908 23:43:11.216398 2650 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 8 23:43:11.462562 kubelet[2650]: E0908 23:43:11.462405 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:11.777809 systemd[1]: Created slice kubepods-besteffort-pod00476f70_efc5_41b2_a6c4_37af519991a0.slice - libcontainer container kubepods-besteffort-pod00476f70_efc5_41b2_a6c4_37af519991a0.slice. 
Sep 8 23:43:11.864655 kubelet[2650]: I0908 23:43:11.864574 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00476f70-efc5-41b2-a6c4-37af519991a0-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-lsndx\" (UID: \"00476f70-efc5-41b2-a6c4-37af519991a0\") " pod="kube-system/cilium-operator-6c4d7847fc-lsndx" Sep 8 23:43:11.864655 kubelet[2650]: I0908 23:43:11.864625 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrgbj\" (UniqueName: \"kubernetes.io/projected/00476f70-efc5-41b2-a6c4-37af519991a0-kube-api-access-jrgbj\") pod \"cilium-operator-6c4d7847fc-lsndx\" (UID: \"00476f70-efc5-41b2-a6c4-37af519991a0\") " pod="kube-system/cilium-operator-6c4d7847fc-lsndx" Sep 8 23:43:11.997506 systemd[1]: Created slice kubepods-besteffort-pod86a4f50a_5f8d_48a7_9b98_55afaf045945.slice - libcontainer container kubepods-besteffort-pod86a4f50a_5f8d_48a7_9b98_55afaf045945.slice. Sep 8 23:43:12.017365 systemd[1]: Created slice kubepods-burstable-pod3aab31a6_a275_4dfa_bd80_dbb77785a728.slice - libcontainer container kubepods-burstable-pod3aab31a6_a275_4dfa_bd80_dbb77785a728.slice. 
Sep 8 23:43:12.066641 kubelet[2650]: I0908 23:43:12.066510 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-kernel\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.066641 kubelet[2650]: I0908 23:43:12.066556 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfw88\" (UniqueName: \"kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-kube-api-access-rfw88\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.066641 kubelet[2650]: I0908 23:43:12.066580 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cni-path\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.066641 kubelet[2650]: I0908 23:43:12.066597 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-config-path\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.066641 kubelet[2650]: I0908 23:43:12.066616 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/86a4f50a-5f8d-48a7-9b98-55afaf045945-lib-modules\") pod \"kube-proxy-zvrm7\" (UID: \"86a4f50a-5f8d-48a7-9b98-55afaf045945\") " pod="kube-system/kube-proxy-zvrm7" Sep 8 23:43:12.066852 kubelet[2650]: I0908 23:43:12.066771 2650 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/86a4f50a-5f8d-48a7-9b98-55afaf045945-xtables-lock\") pod \"kube-proxy-zvrm7\" (UID: \"86a4f50a-5f8d-48a7-9b98-55afaf045945\") " pod="kube-system/kube-proxy-zvrm7" Sep 8 23:43:12.066852 kubelet[2650]: I0908 23:43:12.066793 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx2z8\" (UniqueName: \"kubernetes.io/projected/86a4f50a-5f8d-48a7-9b98-55afaf045945-kube-api-access-rx2z8\") pod \"kube-proxy-zvrm7\" (UID: \"86a4f50a-5f8d-48a7-9b98-55afaf045945\") " pod="kube-system/kube-proxy-zvrm7" Sep 8 23:43:12.067890 kubelet[2650]: I0908 23:43:12.067375 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-bpf-maps\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.067890 kubelet[2650]: I0908 23:43:12.067407 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-xtables-lock\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.067890 kubelet[2650]: I0908 23:43:12.067444 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3aab31a6-a275-4dfa-bd80-dbb77785a728-clustermesh-secrets\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.067890 kubelet[2650]: I0908 23:43:12.067499 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-net\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.067890 kubelet[2650]: I0908 23:43:12.067539 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-run\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.067890 kubelet[2650]: I0908 23:43:12.067557 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-hostproc\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.068088 kubelet[2650]: I0908 23:43:12.067595 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-etc-cni-netd\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.068088 kubelet[2650]: I0908 23:43:12.067611 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-hubble-tls\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.068088 kubelet[2650]: I0908 23:43:12.067627 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/86a4f50a-5f8d-48a7-9b98-55afaf045945-kube-proxy\") pod \"kube-proxy-zvrm7\" (UID: \"86a4f50a-5f8d-48a7-9b98-55afaf045945\") " 
pod="kube-system/kube-proxy-zvrm7" Sep 8 23:43:12.068088 kubelet[2650]: I0908 23:43:12.067674 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-cgroup\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.068088 kubelet[2650]: I0908 23:43:12.067698 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-lib-modules\") pod \"cilium-c5kmx\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " pod="kube-system/cilium-c5kmx" Sep 8 23:43:12.089719 kubelet[2650]: E0908 23:43:12.089675 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:43:12.090401 containerd[1521]: time="2025-09-08T23:43:12.090326579Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lsndx,Uid:00476f70-efc5-41b2-a6c4-37af519991a0,Namespace:kube-system,Attempt:0,}" Sep 8 23:43:12.128386 containerd[1521]: time="2025-09-08T23:43:12.128321082Z" level=info msg="connecting to shim 2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33" address="unix:///run/containerd/s/d938f3b0e492ab59bb8d1bb09992c756cce7a4b9f9912db3912428c9beb35a97" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:43:12.161367 systemd[1]: Started cri-containerd-2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33.scope - libcontainer container 2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33. 
Sep 8 23:43:12.216462 containerd[1521]: time="2025-09-08T23:43:12.216407131Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-lsndx,Uid:00476f70-efc5-41b2-a6c4-37af519991a0,Namespace:kube-system,Attempt:0,} returns sandbox id \"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\""
Sep 8 23:43:12.220808 kubelet[2650]: E0908 23:43:12.220778 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:12.227590 containerd[1521]: time="2025-09-08T23:43:12.227547915Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 8 23:43:12.304028 kubelet[2650]: E0908 23:43:12.303991 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:12.304563 containerd[1521]: time="2025-09-08T23:43:12.304526781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zvrm7,Uid:86a4f50a-5f8d-48a7-9b98-55afaf045945,Namespace:kube-system,Attempt:0,}"
Sep 8 23:43:12.321577 kubelet[2650]: E0908 23:43:12.321479 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:12.322967 containerd[1521]: time="2025-09-08T23:43:12.322938543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5kmx,Uid:3aab31a6-a275-4dfa-bd80-dbb77785a728,Namespace:kube-system,Attempt:0,}"
Sep 8 23:43:12.333726 containerd[1521]: time="2025-09-08T23:43:12.333674615Z" level=info msg="connecting to shim f101b077d91297538873e61c7b85b2c304b08e509014e2620dcbde4cb1779676" address="unix:///run/containerd/s/a39f60b37d3bc4029f441109cdf6fbbc56fac9e931d061e37af0887325a1d8d8" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:43:12.344634 containerd[1521]: time="2025-09-08T23:43:12.344588803Z" level=info msg="connecting to shim 731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6" address="unix:///run/containerd/s/55cb6ffd01c06b4d89e8d1399a70eb7183f78855f58b2e81f58825133c8748ec" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:43:12.380447 systemd[1]: Started cri-containerd-731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6.scope - libcontainer container 731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6.
Sep 8 23:43:12.383065 systemd[1]: Started cri-containerd-f101b077d91297538873e61c7b85b2c304b08e509014e2620dcbde4cb1779676.scope - libcontainer container f101b077d91297538873e61c7b85b2c304b08e509014e2620dcbde4cb1779676.
Sep 8 23:43:12.408198 containerd[1521]: time="2025-09-08T23:43:12.408137209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-c5kmx,Uid:3aab31a6-a275-4dfa-bd80-dbb77785a728,Namespace:kube-system,Attempt:0,} returns sandbox id \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\""
Sep 8 23:43:12.408816 kubelet[2650]: E0908 23:43:12.408792 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:12.419741 containerd[1521]: time="2025-09-08T23:43:12.419704904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zvrm7,Uid:86a4f50a-5f8d-48a7-9b98-55afaf045945,Namespace:kube-system,Attempt:0,} returns sandbox id \"f101b077d91297538873e61c7b85b2c304b08e509014e2620dcbde4cb1779676\""
Sep 8 23:43:12.420422 kubelet[2650]: E0908 23:43:12.420399 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:12.427874 containerd[1521]: time="2025-09-08T23:43:12.427839786Z" level=info msg="CreateContainer within sandbox \"f101b077d91297538873e61c7b85b2c304b08e509014e2620dcbde4cb1779676\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 8 23:43:12.434653 containerd[1521]: time="2025-09-08T23:43:12.434619015Z" level=info msg="Container a13dbee872cb231a313408a1d9618bb8f79110a96dc1c6187d644b494411d3fc: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:12.441149 containerd[1521]: time="2025-09-08T23:43:12.441043530Z" level=info msg="CreateContainer within sandbox \"f101b077d91297538873e61c7b85b2c304b08e509014e2620dcbde4cb1779676\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a13dbee872cb231a313408a1d9618bb8f79110a96dc1c6187d644b494411d3fc\""
Sep 8 23:43:12.443305 containerd[1521]: time="2025-09-08T23:43:12.443279007Z" level=info msg="StartContainer for \"a13dbee872cb231a313408a1d9618bb8f79110a96dc1c6187d644b494411d3fc\""
Sep 8 23:43:12.444724 containerd[1521]: time="2025-09-08T23:43:12.444693219Z" level=info msg="connecting to shim a13dbee872cb231a313408a1d9618bb8f79110a96dc1c6187d644b494411d3fc" address="unix:///run/containerd/s/a39f60b37d3bc4029f441109cdf6fbbc56fac9e931d061e37af0887325a1d8d8" protocol=ttrpc version=3
Sep 8 23:43:12.464485 systemd[1]: Started cri-containerd-a13dbee872cb231a313408a1d9618bb8f79110a96dc1c6187d644b494411d3fc.scope - libcontainer container a13dbee872cb231a313408a1d9618bb8f79110a96dc1c6187d644b494411d3fc.
Sep 8 23:43:12.514371 containerd[1521]: time="2025-09-08T23:43:12.514293908Z" level=info msg="StartContainer for \"a13dbee872cb231a313408a1d9618bb8f79110a96dc1c6187d644b494411d3fc\" returns successfully"
Sep 8 23:43:13.478910 kubelet[2650]: E0908 23:43:13.478868 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:13.491120 kubelet[2650]: I0908 23:43:13.491049 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zvrm7" podStartSLOduration=2.49103346 podStartE2EDuration="2.49103346s" podCreationTimestamp="2025-09-08 23:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:43:13.490705986 +0000 UTC m=+7.146769018" watchObservedRunningTime="2025-09-08 23:43:13.49103346 +0000 UTC m=+7.147096492"
Sep 8 23:43:14.480984 kubelet[2650]: E0908 23:43:14.480936 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:14.956357 containerd[1521]: time="2025-09-08T23:43:14.956198980Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:43:14.957274 containerd[1521]: time="2025-09-08T23:43:14.957246682Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 8 23:43:14.958226 containerd[1521]: time="2025-09-08T23:43:14.958196945Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:43:14.959174 containerd[1521]: time="2025-09-08T23:43:14.959103529Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.731504336s"
Sep 8 23:43:14.959759 containerd[1521]: time="2025-09-08T23:43:14.959647440Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 8 23:43:14.961178 containerd[1521]: time="2025-09-08T23:43:14.961022056Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 8 23:43:14.965436 containerd[1521]: time="2025-09-08T23:43:14.965395060Z" level=info msg="CreateContainer within sandbox \"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 8 23:43:14.972211 containerd[1521]: time="2025-09-08T23:43:14.972105943Z" level=info msg="Container 44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:14.976919 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2277497007.mount: Deactivated successfully.
Sep 8 23:43:14.980111 containerd[1521]: time="2025-09-08T23:43:14.980057325Z" level=info msg="CreateContainer within sandbox \"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\""
Sep 8 23:43:14.980704 containerd[1521]: time="2025-09-08T23:43:14.980631515Z" level=info msg="StartContainer for \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\""
Sep 8 23:43:14.982070 containerd[1521]: time="2025-09-08T23:43:14.981542539Z" level=info msg="connecting to shim 44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793" address="unix:///run/containerd/s/d938f3b0e492ab59bb8d1bb09992c756cce7a4b9f9912db3912428c9beb35a97" protocol=ttrpc version=3
Sep 8 23:43:15.013379 systemd[1]: Started cri-containerd-44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793.scope - libcontainer container 44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793.
Sep 8 23:43:15.037488 kubelet[2650]: E0908 23:43:15.037454 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:15.082662 containerd[1521]: time="2025-09-08T23:43:15.082604139Z" level=info msg="StartContainer for \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" returns successfully"
Sep 8 23:43:15.137407 kubelet[2650]: E0908 23:43:15.137345 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:15.484894 kubelet[2650]: E0908 23:43:15.484588 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:15.484894 kubelet[2650]: E0908 23:43:15.484656 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:15.485362 kubelet[2650]: E0908 23:43:15.485053 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:15.502378 kubelet[2650]: I0908 23:43:15.502320 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-lsndx" podStartSLOduration=1.768212948 podStartE2EDuration="4.502303316s" podCreationTimestamp="2025-09-08 23:43:11 +0000 UTC" firstStartedPulling="2025-09-08 23:43:12.226626533 +0000 UTC m=+5.882689565" lastFinishedPulling="2025-09-08 23:43:14.960716901 +0000 UTC m=+8.616779933" observedRunningTime="2025-09-08 23:43:15.501772084 +0000 UTC m=+9.157835116" watchObservedRunningTime="2025-09-08 23:43:15.502303316 +0000 UTC m=+9.158366348"
Sep 8 23:43:16.485930 kubelet[2650]: E0908 23:43:16.485896 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:20.598475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount150826586.mount: Deactivated successfully.
Sep 8 23:43:21.566795 update_engine[1499]: I20250908 23:43:21.565795 1499 update_attempter.cc:509] Updating boot flags...
Sep 8 23:43:21.946726 containerd[1521]: time="2025-09-08T23:43:21.946619802Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:43:21.947310 containerd[1521]: time="2025-09-08T23:43:21.947012718Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 8 23:43:21.947960 containerd[1521]: time="2025-09-08T23:43:21.947928467Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 8 23:43:21.949518 containerd[1521]: time="2025-09-08T23:43:21.949488768Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.988426353s"
Sep 8 23:43:21.949518 containerd[1521]: time="2025-09-08T23:43:21.949517368Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 8 23:43:21.966427 containerd[1521]: time="2025-09-08T23:43:21.966386325Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 8 23:43:21.974631 containerd[1521]: time="2025-09-08T23:43:21.974598987Z" level=info msg="Container 8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:21.988832 containerd[1521]: time="2025-09-08T23:43:21.988791097Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\""
Sep 8 23:43:21.989434 containerd[1521]: time="2025-09-08T23:43:21.989346050Z" level=info msg="StartContainer for \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\""
Sep 8 23:43:21.990113 containerd[1521]: time="2025-09-08T23:43:21.990073521Z" level=info msg="connecting to shim 8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e" address="unix:///run/containerd/s/55cb6ffd01c06b4d89e8d1399a70eb7183f78855f58b2e81f58825133c8748ec" protocol=ttrpc version=3
Sep 8 23:43:22.012340 systemd[1]: Started cri-containerd-8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e.scope - libcontainer container 8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e.
Sep 8 23:43:22.036948 containerd[1521]: time="2025-09-08T23:43:22.036842982Z" level=info msg="StartContainer for \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" returns successfully"
Sep 8 23:43:22.049944 systemd[1]: cri-containerd-8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e.scope: Deactivated successfully.
Sep 8 23:43:22.068813 containerd[1521]: time="2025-09-08T23:43:22.068761858Z" level=info msg="received exit event container_id:\"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" id:\"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" pid:3148 exited_at:{seconds:1757375002 nanos:56603397}"
Sep 8 23:43:22.069830 containerd[1521]: time="2025-09-08T23:43:22.069794966Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" id:\"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" pid:3148 exited_at:{seconds:1757375002 nanos:56603397}"
Sep 8 23:43:22.503735 kubelet[2650]: E0908 23:43:22.503534 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:22.509489 containerd[1521]: time="2025-09-08T23:43:22.509442155Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 8 23:43:22.518228 containerd[1521]: time="2025-09-08T23:43:22.518192215Z" level=info msg="Container 33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:22.526675 containerd[1521]: time="2025-09-08T23:43:22.526635479Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\""
Sep 8 23:43:22.527220 containerd[1521]: time="2025-09-08T23:43:22.527150073Z" level=info msg="StartContainer for \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\""
Sep 8 23:43:22.529333 containerd[1521]: time="2025-09-08T23:43:22.528262980Z" level=info msg="connecting to shim 33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382" address="unix:///run/containerd/s/55cb6ffd01c06b4d89e8d1399a70eb7183f78855f58b2e81f58825133c8748ec" protocol=ttrpc version=3
Sep 8 23:43:22.550331 systemd[1]: Started cri-containerd-33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382.scope - libcontainer container 33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382.
Sep 8 23:43:22.577799 containerd[1521]: time="2025-09-08T23:43:22.577475859Z" level=info msg="StartContainer for \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" returns successfully"
Sep 8 23:43:22.588440 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 8 23:43:22.588668 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:43:22.588830 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:43:22.590447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 8 23:43:22.592634 systemd[1]: cri-containerd-33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382.scope: Deactivated successfully.
Sep 8 23:43:22.593638 containerd[1521]: time="2025-09-08T23:43:22.593597595Z" level=info msg="TaskExit event in podsandbox handler container_id:\"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" id:\"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" pid:3193 exited_at:{seconds:1757375002 nanos:592464568}"
Sep 8 23:43:22.594733 containerd[1521]: time="2025-09-08T23:43:22.594695783Z" level=info msg="received exit event container_id:\"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" id:\"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" pid:3193 exited_at:{seconds:1757375002 nanos:592464568}"
Sep 8 23:43:22.620887 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 8 23:43:22.973281 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e-rootfs.mount: Deactivated successfully.
Sep 8 23:43:23.511180 kubelet[2650]: E0908 23:43:23.511123 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:23.518418 containerd[1521]: time="2025-09-08T23:43:23.518374460Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 8 23:43:23.541627 containerd[1521]: time="2025-09-08T23:43:23.541583368Z" level=info msg="Container 550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:23.549556 containerd[1521]: time="2025-09-08T23:43:23.549498762Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\""
Sep 8 23:43:23.550008 containerd[1521]: time="2025-09-08T23:43:23.549966557Z" level=info msg="StartContainer for \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\""
Sep 8 23:43:23.553079 containerd[1521]: time="2025-09-08T23:43:23.553048724Z" level=info msg="connecting to shim 550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749" address="unix:///run/containerd/s/55cb6ffd01c06b4d89e8d1399a70eb7183f78855f58b2e81f58825133c8748ec" protocol=ttrpc version=3
Sep 8 23:43:23.575489 systemd[1]: Started cri-containerd-550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749.scope - libcontainer container 550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749.
Sep 8 23:43:23.632373 systemd[1]: cri-containerd-550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749.scope: Deactivated successfully.
Sep 8 23:43:23.636640 containerd[1521]: time="2025-09-08T23:43:23.636587778Z" level=info msg="StartContainer for \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" returns successfully"
Sep 8 23:43:23.647033 containerd[1521]: time="2025-09-08T23:43:23.646959465Z" level=info msg="received exit event container_id:\"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" id:\"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" pid:3241 exited_at:{seconds:1757375003 nanos:644718889}"
Sep 8 23:43:23.647201 containerd[1521]: time="2025-09-08T23:43:23.647015585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" id:\"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" pid:3241 exited_at:{seconds:1757375003 nanos:644718889}"
Sep 8 23:43:23.973211 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749-rootfs.mount: Deactivated successfully.
Sep 8 23:43:24.518308 kubelet[2650]: E0908 23:43:24.517317 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:24.524610 containerd[1521]: time="2025-09-08T23:43:24.524565577Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 8 23:43:24.539080 containerd[1521]: time="2025-09-08T23:43:24.538985309Z" level=info msg="Container 482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:24.553665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4291562571.mount: Deactivated successfully.
Sep 8 23:43:24.559388 containerd[1521]: time="2025-09-08T23:43:24.559343098Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\""
Sep 8 23:43:24.560362 containerd[1521]: time="2025-09-08T23:43:24.560326528Z" level=info msg="StartContainer for \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\""
Sep 8 23:43:24.561502 containerd[1521]: time="2025-09-08T23:43:24.561460196Z" level=info msg="connecting to shim 482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb" address="unix:///run/containerd/s/55cb6ffd01c06b4d89e8d1399a70eb7183f78855f58b2e81f58825133c8748ec" protocol=ttrpc version=3
Sep 8 23:43:24.594415 systemd[1]: Started cri-containerd-482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb.scope - libcontainer container 482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb.
Sep 8 23:43:24.631956 systemd[1]: cri-containerd-482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb.scope: Deactivated successfully.
Sep 8 23:43:24.635996 containerd[1521]: time="2025-09-08T23:43:24.635955107Z" level=info msg="TaskExit event in podsandbox handler container_id:\"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" id:\"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" pid:3280 exited_at:{seconds:1757375004 nanos:634589041}"
Sep 8 23:43:24.640238 containerd[1521]: time="2025-09-08T23:43:24.638219324Z" level=info msg="received exit event container_id:\"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" id:\"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" pid:3280 exited_at:{seconds:1757375004 nanos:634589041}"
Sep 8 23:43:24.648328 containerd[1521]: time="2025-09-08T23:43:24.648221420Z" level=info msg="StartContainer for \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" returns successfully"
Sep 8 23:43:24.663071 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb-rootfs.mount: Deactivated successfully.
Sep 8 23:43:24.670357 containerd[1521]: time="2025-09-08T23:43:24.637458411Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3aab31a6_a275_4dfa_bd80_dbb77785a728.slice/cri-containerd-482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb.scope/memory.events\": no such file or directory"
Sep 8 23:43:25.525980 kubelet[2650]: E0908 23:43:25.525948 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:25.529729 containerd[1521]: time="2025-09-08T23:43:25.529695293Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 8 23:43:25.545350 containerd[1521]: time="2025-09-08T23:43:25.545311099Z" level=info msg="Container b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:25.551773 containerd[1521]: time="2025-09-08T23:43:25.551717036Z" level=info msg="CreateContainer within sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\""
Sep 8 23:43:25.552461 containerd[1521]: time="2025-09-08T23:43:25.552432549Z" level=info msg="StartContainer for \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\""
Sep 8 23:43:25.554707 containerd[1521]: time="2025-09-08T23:43:25.554061013Z" level=info msg="connecting to shim b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008" address="unix:///run/containerd/s/55cb6ffd01c06b4d89e8d1399a70eb7183f78855f58b2e81f58825133c8748ec" protocol=ttrpc version=3
Sep 8 23:43:25.584350 systemd[1]: Started cri-containerd-b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008.scope - libcontainer container b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008.
Sep 8 23:43:25.612977 containerd[1521]: time="2025-09-08T23:43:25.612892114Z" level=info msg="StartContainer for \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" returns successfully"
Sep 8 23:43:25.695992 containerd[1521]: time="2025-09-08T23:43:25.695955017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" id:\"fce0d24a9bc177ce624dfebda6a585dec4182360558d29eebda0e39cbf704ba0\" pid:3348 exited_at:{seconds:1757375005 nanos:695708459}"
Sep 8 23:43:25.698834 kubelet[2650]: I0908 23:43:25.698804 2650 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 8 23:43:25.750428 systemd[1]: Created slice kubepods-burstable-pod2dcc8d14_e5c9_47fd_8578_c16b48bd0d01.slice - libcontainer container kubepods-burstable-pod2dcc8d14_e5c9_47fd_8578_c16b48bd0d01.slice.
Sep 8 23:43:25.757019 systemd[1]: Created slice kubepods-burstable-pod302892a6_1f0b_4a50_b1f5_9c7393386e20.slice - libcontainer container kubepods-burstable-pod302892a6_1f0b_4a50_b1f5_9c7393386e20.slice.
Sep 8 23:43:25.765211 kubelet[2650]: I0908 23:43:25.764639 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdprt\" (UniqueName: \"kubernetes.io/projected/2dcc8d14-e5c9-47fd-8578-c16b48bd0d01-kube-api-access-wdprt\") pod \"coredns-674b8bbfcf-r6t4k\" (UID: \"2dcc8d14-e5c9-47fd-8578-c16b48bd0d01\") " pod="kube-system/coredns-674b8bbfcf-r6t4k"
Sep 8 23:43:25.765211 kubelet[2650]: I0908 23:43:25.764686 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2dcc8d14-e5c9-47fd-8578-c16b48bd0d01-config-volume\") pod \"coredns-674b8bbfcf-r6t4k\" (UID: \"2dcc8d14-e5c9-47fd-8578-c16b48bd0d01\") " pod="kube-system/coredns-674b8bbfcf-r6t4k"
Sep 8 23:43:25.865417 kubelet[2650]: I0908 23:43:25.865285 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssrfb\" (UniqueName: \"kubernetes.io/projected/302892a6-1f0b-4a50-b1f5-9c7393386e20-kube-api-access-ssrfb\") pod \"coredns-674b8bbfcf-n78lz\" (UID: \"302892a6-1f0b-4a50-b1f5-9c7393386e20\") " pod="kube-system/coredns-674b8bbfcf-n78lz"
Sep 8 23:43:25.865417 kubelet[2650]: I0908 23:43:25.865346 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/302892a6-1f0b-4a50-b1f5-9c7393386e20-config-volume\") pod \"coredns-674b8bbfcf-n78lz\" (UID: \"302892a6-1f0b-4a50-b1f5-9c7393386e20\") " pod="kube-system/coredns-674b8bbfcf-n78lz"
Sep 8 23:43:26.055274 kubelet[2650]: E0908 23:43:26.055187 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:26.056356 containerd[1521]: time="2025-09-08T23:43:26.056311175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6t4k,Uid:2dcc8d14-e5c9-47fd-8578-c16b48bd0d01,Namespace:kube-system,Attempt:0,}"
Sep 8 23:43:26.060187 kubelet[2650]: E0908 23:43:26.060135 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:26.062947 containerd[1521]: time="2025-09-08T23:43:26.062795314Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n78lz,Uid:302892a6-1f0b-4a50-b1f5-9c7393386e20,Namespace:kube-system,Attempt:0,}"
Sep 8 23:43:26.535212 kubelet[2650]: E0908 23:43:26.534817 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:27.538401 kubelet[2650]: E0908 23:43:27.537792 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:27.687103 systemd-networkd[1434]: cilium_host: Link UP
Sep 8 23:43:27.687282 systemd-networkd[1434]: cilium_net: Link UP
Sep 8 23:43:27.687421 systemd-networkd[1434]: cilium_net: Gained carrier
Sep 8 23:43:27.687545 systemd-networkd[1434]: cilium_host: Gained carrier
Sep 8 23:43:27.777245 systemd-networkd[1434]: cilium_vxlan: Link UP
Sep 8 23:43:27.777252 systemd-networkd[1434]: cilium_vxlan: Gained carrier
Sep 8 23:43:27.831379 systemd-networkd[1434]: cilium_net: Gained IPv6LL
Sep 8 23:43:28.032195 kernel: NET: Registered PF_ALG protocol family
Sep 8 23:43:28.288416 systemd-networkd[1434]: cilium_host: Gained IPv6LL
Sep 8 23:43:28.535731 kubelet[2650]: E0908 23:43:28.535702 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:28.605424 systemd-networkd[1434]: lxc_health: Link UP
Sep 8 23:43:28.606827 systemd-networkd[1434]: lxc_health: Gained carrier
Sep 8 23:43:28.991642 systemd-networkd[1434]: cilium_vxlan: Gained IPv6LL
Sep 8 23:43:29.107977 systemd-networkd[1434]: lxc7cef327f6984: Link UP
Sep 8 23:43:29.108210 kernel: eth0: renamed from tmpa791f
Sep 8 23:43:29.109232 systemd-networkd[1434]: lxc7cef327f6984: Gained carrier
Sep 8 23:43:29.122401 systemd-networkd[1434]: lxc7042cf87d33e: Link UP
Sep 8 23:43:29.125181 kernel: eth0: renamed from tmp31f33
Sep 8 23:43:29.129100 systemd-networkd[1434]: lxc7042cf87d33e: Gained carrier
Sep 8 23:43:30.143321 systemd-networkd[1434]: lxc_health: Gained IPv6LL
Sep 8 23:43:30.332045 kubelet[2650]: E0908 23:43:30.331684 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:30.359107 kubelet[2650]: I0908 23:43:30.357280 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-c5kmx" podStartSLOduration=9.803610787 podStartE2EDuration="19.357260771s" podCreationTimestamp="2025-09-08 23:43:11 +0000 UTC" firstStartedPulling="2025-09-08 23:43:12.409705138 +0000 UTC m=+6.065768170" lastFinishedPulling="2025-09-08 23:43:21.963355122 +0000 UTC m=+15.619418154" observedRunningTime="2025-09-08 23:43:26.579093987 +0000 UTC m=+20.235157059" watchObservedRunningTime="2025-09-08 23:43:30.357260771 +0000 UTC m=+24.013323803"
Sep 8 23:43:30.464293 systemd-networkd[1434]: lxc7cef327f6984: Gained IPv6LL
Sep 8 23:43:31.103294 systemd-networkd[1434]: lxc7042cf87d33e: Gained IPv6LL
Sep 8 23:43:32.603667 containerd[1521]: time="2025-09-08T23:43:32.601733902Z" level=info msg="connecting to shim a791f0c06ccc3145fa503631a6a6572a8597be7b101d7639c6f01340724873dc" address="unix:///run/containerd/s/f2eaad4144f923886a7e1822590e064b2a67ea7babf8070ad9336ead6ebda58c" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:43:32.604151 containerd[1521]: time="2025-09-08T23:43:32.604116565Z" level=info msg="connecting to shim 31f338b8e350b09ebfde5839cd0244bf1473b1680d5a15be2881f2fde2db5563" address="unix:///run/containerd/s/9a4ffff273bd59151c291e35806409e6fc226769e27e8a877ea9900457eebb48" namespace=k8s.io protocol=ttrpc version=3
Sep 8 23:43:32.628324 systemd[1]: Started cri-containerd-31f338b8e350b09ebfde5839cd0244bf1473b1680d5a15be2881f2fde2db5563.scope - libcontainer container 31f338b8e350b09ebfde5839cd0244bf1473b1680d5a15be2881f2fde2db5563.
Sep 8 23:43:32.629459 systemd[1]: Started cri-containerd-a791f0c06ccc3145fa503631a6a6572a8597be7b101d7639c6f01340724873dc.scope - libcontainer container a791f0c06ccc3145fa503631a6a6572a8597be7b101d7639c6f01340724873dc.
Sep 8 23:43:32.640354 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 8 23:43:32.641463 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Sep 8 23:43:32.666861 containerd[1521]: time="2025-09-08T23:43:32.666814714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-r6t4k,Uid:2dcc8d14-e5c9-47fd-8578-c16b48bd0d01,Namespace:kube-system,Attempt:0,} returns sandbox id \"a791f0c06ccc3145fa503631a6a6572a8597be7b101d7639c6f01340724873dc\""
Sep 8 23:43:32.669376 kubelet[2650]: E0908 23:43:32.669342 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:32.671353 containerd[1521]: time="2025-09-08T23:43:32.671117243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-n78lz,Uid:302892a6-1f0b-4a50-b1f5-9c7393386e20,Namespace:kube-system,Attempt:0,} returns sandbox id \"31f338b8e350b09ebfde5839cd0244bf1473b1680d5a15be2881f2fde2db5563\""
Sep 8 23:43:32.672521 kubelet[2650]: E0908 23:43:32.672432 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:32.679257 containerd[1521]: time="2025-09-08T23:43:32.679220625Z" level=info msg="CreateContainer within sandbox \"31f338b8e350b09ebfde5839cd0244bf1473b1680d5a15be2881f2fde2db5563\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 8 23:43:32.680383 containerd[1521]: time="2025-09-08T23:43:32.680326417Z" level=info msg="CreateContainer within sandbox \"a791f0c06ccc3145fa503631a6a6572a8597be7b101d7639c6f01340724873dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Sep 8 23:43:32.691767 containerd[1521]: time="2025-09-08T23:43:32.691735615Z" level=info msg="Container d8e44c700686ffc8993e6bd7a1509607b42abcccd8af7008c3ef09a2a1613869: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:32.692481 containerd[1521]: time="2025-09-08T23:43:32.692445970Z" level=info msg="Container 1373522e7ae839adbc5083e34a3f6bd789be6de6b810832a632de4648bfa6a1d: CDI devices from CRI Config.CDIDevices: []"
Sep 8 23:43:32.697284 containerd[1521]: time="2025-09-08T23:43:32.697152896Z" level=info msg="CreateContainer within sandbox \"a791f0c06ccc3145fa503631a6a6572a8597be7b101d7639c6f01340724873dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d8e44c700686ffc8993e6bd7a1509607b42abcccd8af7008c3ef09a2a1613869\""
Sep 8 23:43:32.698024 containerd[1521]: time="2025-09-08T23:43:32.698001890Z" level=info msg="StartContainer for \"d8e44c700686ffc8993e6bd7a1509607b42abcccd8af7008c3ef09a2a1613869\""
Sep 8 23:43:32.698937 containerd[1521]: time="2025-09-08T23:43:32.698898683Z" level=info msg="connecting to shim d8e44c700686ffc8993e6bd7a1509607b42abcccd8af7008c3ef09a2a1613869" address="unix:///run/containerd/s/f2eaad4144f923886a7e1822590e064b2a67ea7babf8070ad9336ead6ebda58c" protocol=ttrpc version=3
Sep 8 23:43:32.703295 containerd[1521]: time="2025-09-08T23:43:32.703254772Z" level=info msg="CreateContainer within sandbox \"31f338b8e350b09ebfde5839cd0244bf1473b1680d5a15be2881f2fde2db5563\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1373522e7ae839adbc5083e34a3f6bd789be6de6b810832a632de4648bfa6a1d\""
Sep 8 23:43:32.704010 containerd[1521]: time="2025-09-08T23:43:32.703977567Z" level=info msg="StartContainer for \"1373522e7ae839adbc5083e34a3f6bd789be6de6b810832a632de4648bfa6a1d\""
Sep 8 23:43:32.704982 containerd[1521]: time="2025-09-08T23:43:32.704955280Z" level=info msg="connecting to shim 1373522e7ae839adbc5083e34a3f6bd789be6de6b810832a632de4648bfa6a1d" address="unix:///run/containerd/s/9a4ffff273bd59151c291e35806409e6fc226769e27e8a877ea9900457eebb48" protocol=ttrpc version=3
Sep 8 23:43:32.720315 systemd[1]: Started cri-containerd-d8e44c700686ffc8993e6bd7a1509607b42abcccd8af7008c3ef09a2a1613869.scope - libcontainer container d8e44c700686ffc8993e6bd7a1509607b42abcccd8af7008c3ef09a2a1613869.
Sep 8 23:43:32.723383 systemd[1]: Started cri-containerd-1373522e7ae839adbc5083e34a3f6bd789be6de6b810832a632de4648bfa6a1d.scope - libcontainer container 1373522e7ae839adbc5083e34a3f6bd789be6de6b810832a632de4648bfa6a1d.
Sep 8 23:43:32.763776 containerd[1521]: time="2025-09-08T23:43:32.762945543Z" level=info msg="StartContainer for \"d8e44c700686ffc8993e6bd7a1509607b42abcccd8af7008c3ef09a2a1613869\" returns successfully"
Sep 8 23:43:32.764127 containerd[1521]: time="2025-09-08T23:43:32.764091694Z" level=info msg="StartContainer for \"1373522e7ae839adbc5083e34a3f6bd789be6de6b810832a632de4648bfa6a1d\" returns successfully"
Sep 8 23:43:33.547755 kubelet[2650]: E0908 23:43:33.547359 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:33.551437 kubelet[2650]: E0908 23:43:33.551410 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:33.560491 kubelet[2650]: I0908 23:43:33.560430 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-n78lz" podStartSLOduration=22.560416406999998 podStartE2EDuration="22.560416407s" podCreationTimestamp="2025-09-08 23:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:43:33.559430294 +0000 UTC m=+27.215493286" watchObservedRunningTime="2025-09-08 23:43:33.560416407 +0000 UTC m=+27.216479439"
Sep 8 23:43:33.572280 kubelet[2650]: I0908 23:43:33.571892 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-r6t4k" podStartSLOduration=22.571873328 podStartE2EDuration="22.571873328s" podCreationTimestamp="2025-09-08 23:43:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:43:33.571294852 +0000 UTC m=+27.227357844" watchObservedRunningTime="2025-09-08 23:43:33.571873328 +0000 UTC m=+27.227936360"
Sep 8 23:43:33.597076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1871067017.mount: Deactivated successfully.
Sep 8 23:43:34.554316 kubelet[2650]: E0908 23:43:34.554251 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:34.554957 kubelet[2650]: E0908 23:43:34.554674 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:34.799688 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:57332.service - OpenSSH per-connection server daemon (10.0.0.1:57332).
Sep 8 23:43:34.857761 sshd[4006]: Accepted publickey for core from 10.0.0.1 port 57332 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:43:34.859047 sshd-session[4006]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:43:34.862909 systemd-logind[1491]: New session 8 of user core.
Sep 8 23:43:34.870412 systemd[1]: Started session-8.scope - Session 8 of User core.
Sep 8 23:43:35.006659 sshd[4008]: Connection closed by 10.0.0.1 port 57332
Sep 8 23:43:35.007193 sshd-session[4006]: pam_unix(sshd:session): session closed for user core
Sep 8 23:43:35.011090 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:57332.service: Deactivated successfully.
Sep 8 23:43:35.012862 systemd[1]: session-8.scope: Deactivated successfully.
Sep 8 23:43:35.013678 systemd-logind[1491]: Session 8 logged out. Waiting for processes to exit.
Sep 8 23:43:35.014713 systemd-logind[1491]: Removed session 8.
Sep 8 23:43:35.557988 kubelet[2650]: E0908 23:43:35.557957 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:35.558351 kubelet[2650]: E0908 23:43:35.558002 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:40.025419 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:53556.service - OpenSSH per-connection server daemon (10.0.0.1:53556).
Sep 8 23:43:40.086066 sshd[4026]: Accepted publickey for core from 10.0.0.1 port 53556 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:43:40.088041 sshd-session[4026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:43:40.094551 systemd-logind[1491]: New session 9 of user core.
Sep 8 23:43:40.100342 systemd[1]: Started session-9.scope - Session 9 of User core.
Sep 8 23:43:40.249686 sshd[4028]: Connection closed by 10.0.0.1 port 53556
Sep 8 23:43:40.250543 sshd-session[4026]: pam_unix(sshd:session): session closed for user core
Sep 8 23:43:40.254980 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:53556.service: Deactivated successfully.
Sep 8 23:43:40.258039 systemd[1]: session-9.scope: Deactivated successfully.
Sep 8 23:43:40.261341 systemd-logind[1491]: Session 9 logged out. Waiting for processes to exit.
Sep 8 23:43:40.264834 systemd-logind[1491]: Removed session 9.
Sep 8 23:43:40.782813 kubelet[2650]: I0908 23:43:40.782735 2650 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Sep 8 23:43:40.783635 kubelet[2650]: E0908 23:43:40.783609 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:41.565733 kubelet[2650]: E0908 23:43:41.565701 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:43:45.257187 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:53558.service - OpenSSH per-connection server daemon (10.0.0.1:53558).
Sep 8 23:43:45.335563 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 53558 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:43:45.338386 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:43:45.344521 systemd-logind[1491]: New session 10 of user core.
Sep 8 23:43:45.354321 systemd[1]: Started session-10.scope - Session 10 of User core.
Sep 8 23:43:45.469865 sshd[4049]: Connection closed by 10.0.0.1 port 53558
Sep 8 23:43:45.470642 sshd-session[4047]: pam_unix(sshd:session): session closed for user core
Sep 8 23:43:45.475197 systemd-logind[1491]: Session 10 logged out. Waiting for processes to exit.
Sep 8 23:43:45.475428 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:53558.service: Deactivated successfully.
Sep 8 23:43:45.477130 systemd[1]: session-10.scope: Deactivated successfully.
Sep 8 23:43:45.479234 systemd-logind[1491]: Removed session 10.
Sep 8 23:43:50.484785 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:46766.service - OpenSSH per-connection server daemon (10.0.0.1:46766).
Sep 8 23:43:50.548255 sshd[4064]: Accepted publickey for core from 10.0.0.1 port 46766 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:43:50.549488 sshd-session[4064]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:43:50.553455 systemd-logind[1491]: New session 11 of user core.
Sep 8 23:43:50.564327 systemd[1]: Started session-11.scope - Session 11 of User core.
Sep 8 23:43:50.689675 sshd[4066]: Connection closed by 10.0.0.1 port 46766
Sep 8 23:43:50.688881 sshd-session[4064]: pam_unix(sshd:session): session closed for user core
Sep 8 23:43:50.697485 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:46766.service: Deactivated successfully.
Sep 8 23:43:50.699072 systemd[1]: session-11.scope: Deactivated successfully.
Sep 8 23:43:50.700628 systemd-logind[1491]: Session 11 logged out. Waiting for processes to exit.
Sep 8 23:43:50.702085 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:46772.service - OpenSSH per-connection server daemon (10.0.0.1:46772).
Sep 8 23:43:50.703358 systemd-logind[1491]: Removed session 11.
Sep 8 23:43:50.757803 sshd[4080]: Accepted publickey for core from 10.0.0.1 port 46772 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:43:50.758213 sshd-session[4080]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:43:50.764795 systemd-logind[1491]: New session 12 of user core.
Sep 8 23:43:50.775524 systemd[1]: Started session-12.scope - Session 12 of User core.
Sep 8 23:43:50.917334 sshd[4082]: Connection closed by 10.0.0.1 port 46772
Sep 8 23:43:50.917892 sshd-session[4080]: pam_unix(sshd:session): session closed for user core
Sep 8 23:43:50.928725 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:46772.service: Deactivated successfully.
Sep 8 23:43:50.930804 systemd[1]: session-12.scope: Deactivated successfully.
Sep 8 23:43:50.932198 systemd-logind[1491]: Session 12 logged out. Waiting for processes to exit.
Sep 8 23:43:50.937104 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:46774.service - OpenSSH per-connection server daemon (10.0.0.1:46774).
Sep 8 23:43:50.939541 systemd-logind[1491]: Removed session 12.
Sep 8 23:43:50.988771 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 46774 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:43:50.990114 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:43:50.994008 systemd-logind[1491]: New session 13 of user core.
Sep 8 23:43:51.002332 systemd[1]: Started session-13.scope - Session 13 of User core.
Sep 8 23:43:51.112080 sshd[4095]: Connection closed by 10.0.0.1 port 46774
Sep 8 23:43:51.112133 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
Sep 8 23:43:51.115986 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:46774.service: Deactivated successfully.
Sep 8 23:43:51.118827 systemd[1]: session-13.scope: Deactivated successfully.
Sep 8 23:43:51.121614 systemd-logind[1491]: Session 13 logged out. Waiting for processes to exit.
Sep 8 23:43:51.123152 systemd-logind[1491]: Removed session 13.
Sep 8 23:43:56.123867 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:46778.service - OpenSSH per-connection server daemon (10.0.0.1:46778).
Sep 8 23:43:56.180754 sshd[4110]: Accepted publickey for core from 10.0.0.1 port 46778 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:43:56.182214 sshd-session[4110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:43:56.186088 systemd-logind[1491]: New session 14 of user core.
Sep 8 23:43:56.203346 systemd[1]: Started session-14.scope - Session 14 of User core.
Sep 8 23:43:56.324859 sshd[4112]: Connection closed by 10.0.0.1 port 46778
Sep 8 23:43:56.325145 sshd-session[4110]: pam_unix(sshd:session): session closed for user core
Sep 8 23:43:56.329623 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:46778.service: Deactivated successfully.
Sep 8 23:43:56.332010 systemd[1]: session-14.scope: Deactivated successfully.
Sep 8 23:43:56.333317 systemd-logind[1491]: Session 14 logged out. Waiting for processes to exit.
Sep 8 23:43:56.334443 systemd-logind[1491]: Removed session 14.
Sep 8 23:44:01.349520 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:45512.service - OpenSSH per-connection server daemon (10.0.0.1:45512).
Sep 8 23:44:01.395721 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 45512 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:01.396831 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:01.400479 systemd-logind[1491]: New session 15 of user core.
Sep 8 23:44:01.410309 systemd[1]: Started session-15.scope - Session 15 of User core.
Sep 8 23:44:01.517234 sshd[4127]: Connection closed by 10.0.0.1 port 45512
Sep 8 23:44:01.517914 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:01.531702 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:45512.service: Deactivated successfully.
Sep 8 23:44:01.534574 systemd[1]: session-15.scope: Deactivated successfully.
Sep 8 23:44:01.535185 systemd-logind[1491]: Session 15 logged out. Waiting for processes to exit.
Sep 8 23:44:01.537603 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:45528.service - OpenSSH per-connection server daemon (10.0.0.1:45528).
Sep 8 23:44:01.538247 systemd-logind[1491]: Removed session 15.
Sep 8 23:44:01.594761 sshd[4141]: Accepted publickey for core from 10.0.0.1 port 45528 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:01.595996 sshd-session[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:01.599989 systemd-logind[1491]: New session 16 of user core.
Sep 8 23:44:01.607310 systemd[1]: Started session-16.scope - Session 16 of User core.
Sep 8 23:44:01.803073 sshd[4143]: Connection closed by 10.0.0.1 port 45528
Sep 8 23:44:01.803737 sshd-session[4141]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:01.816532 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:45528.service: Deactivated successfully.
Sep 8 23:44:01.818107 systemd[1]: session-16.scope: Deactivated successfully.
Sep 8 23:44:01.818760 systemd-logind[1491]: Session 16 logged out. Waiting for processes to exit.
Sep 8 23:44:01.821236 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:45530.service - OpenSSH per-connection server daemon (10.0.0.1:45530).
Sep 8 23:44:01.821845 systemd-logind[1491]: Removed session 16.
Sep 8 23:44:01.863543 sshd[4155]: Accepted publickey for core from 10.0.0.1 port 45530 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:01.864748 sshd-session[4155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:01.869242 systemd-logind[1491]: New session 17 of user core.
Sep 8 23:44:01.881338 systemd[1]: Started session-17.scope - Session 17 of User core.
Sep 8 23:44:02.486414 sshd[4157]: Connection closed by 10.0.0.1 port 45530
Sep 8 23:44:02.486960 sshd-session[4155]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:02.499136 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:45530.service: Deactivated successfully.
Sep 8 23:44:02.500877 systemd[1]: session-17.scope: Deactivated successfully.
Sep 8 23:44:02.504172 systemd-logind[1491]: Session 17 logged out. Waiting for processes to exit.
Sep 8 23:44:02.507604 systemd-logind[1491]: Removed session 17.
Sep 8 23:44:02.510672 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:45538.service - OpenSSH per-connection server daemon (10.0.0.1:45538).
Sep 8 23:44:02.566468 sshd[4178]: Accepted publickey for core from 10.0.0.1 port 45538 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:02.568068 sshd-session[4178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:02.572652 systemd-logind[1491]: New session 18 of user core.
Sep 8 23:44:02.589387 systemd[1]: Started session-18.scope - Session 18 of User core.
Sep 8 23:44:02.826186 sshd[4180]: Connection closed by 10.0.0.1 port 45538
Sep 8 23:44:02.826032 sshd-session[4178]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:02.835737 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:45538.service: Deactivated successfully.
Sep 8 23:44:02.838753 systemd[1]: session-18.scope: Deactivated successfully.
Sep 8 23:44:02.840476 systemd-logind[1491]: Session 18 logged out. Waiting for processes to exit.
Sep 8 23:44:02.843821 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:45546.service - OpenSSH per-connection server daemon (10.0.0.1:45546).
Sep 8 23:44:02.844539 systemd-logind[1491]: Removed session 18.
Sep 8 23:44:02.896130 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 45546 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:02.897478 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:02.902232 systemd-logind[1491]: New session 19 of user core.
Sep 8 23:44:02.906344 systemd[1]: Started session-19.scope - Session 19 of User core.
Sep 8 23:44:03.022655 sshd[4195]: Connection closed by 10.0.0.1 port 45546
Sep 8 23:44:03.022994 sshd-session[4193]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:03.026725 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:45546.service: Deactivated successfully.
Sep 8 23:44:03.028941 systemd[1]: session-19.scope: Deactivated successfully.
Sep 8 23:44:03.030836 systemd-logind[1491]: Session 19 logged out. Waiting for processes to exit.
Sep 8 23:44:03.032475 systemd-logind[1491]: Removed session 19.
Sep 8 23:44:08.039733 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:45552.service - OpenSSH per-connection server daemon (10.0.0.1:45552).
Sep 8 23:44:08.104238 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 45552 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:08.105648 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:08.111105 systemd-logind[1491]: New session 20 of user core.
Sep 8 23:44:08.118354 systemd[1]: Started session-20.scope - Session 20 of User core.
Sep 8 23:44:08.239448 sshd[4215]: Connection closed by 10.0.0.1 port 45552
Sep 8 23:44:08.239794 sshd-session[4213]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:08.244234 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:45552.service: Deactivated successfully.
Sep 8 23:44:08.245958 systemd[1]: session-20.scope: Deactivated successfully.
Sep 8 23:44:08.247310 systemd-logind[1491]: Session 20 logged out. Waiting for processes to exit.
Sep 8 23:44:08.248669 systemd-logind[1491]: Removed session 20.
Sep 8 23:44:13.252614 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:36532.service - OpenSSH per-connection server daemon (10.0.0.1:36532).
Sep 8 23:44:13.305589 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 36532 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:13.306799 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:13.310415 systemd-logind[1491]: New session 21 of user core.
Sep 8 23:44:13.315308 systemd[1]: Started session-21.scope - Session 21 of User core.
Sep 8 23:44:13.428198 sshd[4233]: Connection closed by 10.0.0.1 port 36532
Sep 8 23:44:13.428402 sshd-session[4231]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:13.431723 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:36532.service: Deactivated successfully.
Sep 8 23:44:13.433476 systemd[1]: session-21.scope: Deactivated successfully.
Sep 8 23:44:13.434101 systemd-logind[1491]: Session 21 logged out. Waiting for processes to exit.
Sep 8 23:44:13.435607 systemd-logind[1491]: Removed session 21.
Sep 8 23:44:18.444862 systemd[1]: Started sshd@21-10.0.0.77:22-10.0.0.1:36540.service - OpenSSH per-connection server daemon (10.0.0.1:36540).
Sep 8 23:44:18.509813 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 36540 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:18.511251 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:18.517490 systemd-logind[1491]: New session 22 of user core.
Sep 8 23:44:18.524352 systemd[1]: Started session-22.scope - Session 22 of User core.
Sep 8 23:44:18.633575 sshd[4249]: Connection closed by 10.0.0.1 port 36540
Sep 8 23:44:18.635098 sshd-session[4247]: pam_unix(sshd:session): session closed for user core
Sep 8 23:44:18.645675 systemd[1]: sshd@21-10.0.0.77:22-10.0.0.1:36540.service: Deactivated successfully.
Sep 8 23:44:18.647312 systemd[1]: session-22.scope: Deactivated successfully.
Sep 8 23:44:18.648215 systemd-logind[1491]: Session 22 logged out. Waiting for processes to exit.
Sep 8 23:44:18.651664 systemd[1]: Started sshd@22-10.0.0.77:22-10.0.0.1:36550.service - OpenSSH per-connection server daemon (10.0.0.1:36550).
Sep 8 23:44:18.653056 systemd-logind[1491]: Removed session 22.
Sep 8 23:44:18.703541 sshd[4262]: Accepted publickey for core from 10.0.0.1 port 36550 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc
Sep 8 23:44:18.705019 sshd-session[4262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 8 23:44:18.709962 systemd-logind[1491]: New session 23 of user core.
Sep 8 23:44:18.725357 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 8 23:44:20.440328 kubelet[2650]: E0908 23:44:20.440239 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:44:20.440695 kubelet[2650]: E0908 23:44:20.440668 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 8 23:44:20.958419 containerd[1521]: time="2025-09-08T23:44:20.957688413Z" level=info msg="StopContainer for \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" with timeout 30 (s)"
Sep 8 23:44:20.959069 containerd[1521]: time="2025-09-08T23:44:20.958943627Z" level=info msg="Stop container \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" with signal terminated"
Sep 8 23:44:20.975373 systemd[1]: cri-containerd-44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793.scope: Deactivated successfully.
Sep 8 23:44:20.987048 containerd[1521]: time="2025-09-08T23:44:20.986997536Z" level=info msg="received exit event container_id:\"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" id:\"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" pid:3069 exited_at:{seconds:1757375060 nanos:986678093}"
Sep 8 23:44:20.987374 containerd[1521]: time="2025-09-08T23:44:20.987194458Z" level=info msg="TaskExit event in podsandbox handler container_id:\"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" id:\"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" pid:3069 exited_at:{seconds:1757375060 nanos:986678093}"
Sep 8 23:44:21.012300 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793-rootfs.mount: Deactivated successfully.
Sep 8 23:44:21.023304 containerd[1521]: time="2025-09-08T23:44:21.023255688Z" level=info msg="StopContainer for \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" returns successfully"
Sep 8 23:44:21.026744 containerd[1521]: time="2025-09-08T23:44:21.026697965Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Sep 8 23:44:21.027611 containerd[1521]: time="2025-09-08T23:44:21.027562974Z" level=info msg="StopPodSandbox for \"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\""
Sep 8 23:44:21.030769 containerd[1521]: time="2025-09-08T23:44:21.030742728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" id:\"8e691bfad7b1f7ab79399bae20ef50d26167df10aac6154d0e432eab8a33ba40\" pid:4305 exited_at:{seconds:1757375061 nanos:30458085}"
Sep 8 23:44:21.032571 containerd[1521]: time="2025-09-08T23:44:21.032521387Z" level=info msg="StopContainer for \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" with timeout 2 (s)"
Sep 8 23:44:21.033293 containerd[1521]: time="2025-09-08T23:44:21.033219555Z" level=info msg="Stop container \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" with signal terminated"
Sep 8 23:44:21.034432 containerd[1521]: time="2025-09-08T23:44:21.034203445Z" level=info msg="Container to stop \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:44:21.042306 systemd-networkd[1434]: lxc_health: Link DOWN
Sep 8 23:44:21.042314 systemd-networkd[1434]: lxc_health: Lost carrier
Sep 8 23:44:21.047567 systemd[1]: cri-containerd-2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33.scope: Deactivated successfully.
Sep 8 23:44:21.050305 containerd[1521]: time="2025-09-08T23:44:21.050274857Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\" id:\"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\" pid:2768 exit_status:137 exited_at:{seconds:1757375061 nanos:49875852}"
Sep 8 23:44:21.061464 systemd[1]: cri-containerd-b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008.scope: Deactivated successfully.
Sep 8 23:44:21.061970 systemd[1]: cri-containerd-b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008.scope: Consumed 6.123s CPU time, 126.8M memory peak, 128K read from disk, 14.2M written to disk.
Sep 8 23:44:21.062328 containerd[1521]: time="2025-09-08T23:44:21.062291625Z" level=info msg="received exit event container_id:\"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" id:\"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" pid:3318 exited_at:{seconds:1757375061 nanos:61940661}"
Sep 8 23:44:21.082025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33-rootfs.mount: Deactivated successfully.
Sep 8 23:44:21.086532 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008-rootfs.mount: Deactivated successfully.
Sep 8 23:44:21.091151 containerd[1521]: time="2025-09-08T23:44:21.090975371Z" level=info msg="shim disconnected" id=2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33 namespace=k8s.io
Sep 8 23:44:21.103671 containerd[1521]: time="2025-09-08T23:44:21.091007651Z" level=warning msg="cleaning up after shim disconnected" id=2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33 namespace=k8s.io
Sep 8 23:44:21.103671 containerd[1521]: time="2025-09-08T23:44:21.103486944Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 8 23:44:21.103671 containerd[1521]: time="2025-09-08T23:44:21.094398647Z" level=info msg="StopContainer for \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" returns successfully"
Sep 8 23:44:21.104207 containerd[1521]: time="2025-09-08T23:44:21.104183872Z" level=info msg="StopPodSandbox for \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\""
Sep 8 23:44:21.104272 containerd[1521]: time="2025-09-08T23:44:21.104255913Z" level=info msg="Container to stop \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:44:21.104302 containerd[1521]: time="2025-09-08T23:44:21.104272673Z" level=info msg="Container to stop \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:44:21.104302 containerd[1521]: time="2025-09-08T23:44:21.104282113Z" level=info msg="Container to stop \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:44:21.104302 containerd[1521]: time="2025-09-08T23:44:21.104291793Z" level=info msg="Container to stop \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:44:21.104302 containerd[1521]: time="2025-09-08T23:44:21.104299593Z" level=info msg="Container to stop \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 8 23:44:21.111904 systemd[1]: cri-containerd-731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6.scope: Deactivated successfully.
Sep 8 23:44:21.127275 containerd[1521]: time="2025-09-08T23:44:21.127153997Z" level=info msg="received exit event sandbox_id:\"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\" exit_status:137 exited_at:{seconds:1757375061 nanos:49875852}"
Sep 8 23:44:21.128181 containerd[1521]: time="2025-09-08T23:44:21.127975926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" id:\"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" pid:3318 exited_at:{seconds:1757375061 nanos:61940661}"
Sep 8 23:44:21.128271 containerd[1521]: time="2025-09-08T23:44:21.128251809Z" level=info msg="TaskExit event in podsandbox handler container_id:\"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" id:\"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" pid:2849 exit_status:137 exited_at:{seconds:1757375061 nanos:120419285}"
Sep 8 23:44:21.129025 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33-shm.mount: Deactivated successfully.
Sep 8 23:44:21.129771 containerd[1521]: time="2025-09-08T23:44:21.129740464Z" level=info msg="TearDown network for sandbox \"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\" successfully" Sep 8 23:44:21.129866 containerd[1521]: time="2025-09-08T23:44:21.129778025Z" level=info msg="StopPodSandbox for \"2afe4aa736c22ee7154bf243a24fa08f68541c5e6cd2a4b815f8a4387f84ff33\" returns successfully" Sep 8 23:44:21.146144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6-rootfs.mount: Deactivated successfully. Sep 8 23:44:21.154059 containerd[1521]: time="2025-09-08T23:44:21.154004483Z" level=info msg="shim disconnected" id=731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6 namespace=k8s.io Sep 8 23:44:21.154217 containerd[1521]: time="2025-09-08T23:44:21.154041684Z" level=warning msg="cleaning up after shim disconnected" id=731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6 namespace=k8s.io Sep 8 23:44:21.154217 containerd[1521]: time="2025-09-08T23:44:21.154074604Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 8 23:44:21.154520 containerd[1521]: time="2025-09-08T23:44:21.154263006Z" level=info msg="received exit event sandbox_id:\"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" exit_status:137 exited_at:{seconds:1757375061 nanos:120419285}" Sep 8 23:44:21.154520 containerd[1521]: time="2025-09-08T23:44:21.154398528Z" level=info msg="TearDown network for sandbox \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" successfully" Sep 8 23:44:21.154520 containerd[1521]: time="2025-09-08T23:44:21.154422688Z" level=info msg="StopPodSandbox for \"731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6\" returns successfully" Sep 8 23:44:21.209217 kubelet[2650]: I0908 23:44:21.208985 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-kernel\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209217 kubelet[2650]: I0908 23:44:21.209035 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-config-path\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209217 kubelet[2650]: I0908 23:44:21.209053 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-bpf-maps\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209217 kubelet[2650]: I0908 23:44:21.209074 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-xtables-lock\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209217 kubelet[2650]: I0908 23:44:21.209094 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rfw88\" (UniqueName: \"kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-kube-api-access-rfw88\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209217 kubelet[2650]: I0908 23:44:21.209109 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-run\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209586 kubelet[2650]: I0908 23:44:21.209123 2650 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-hostproc\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209586 kubelet[2650]: I0908 23:44:21.209137 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cni-path\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209586 kubelet[2650]: I0908 23:44:21.209168 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrgbj\" (UniqueName: \"kubernetes.io/projected/00476f70-efc5-41b2-a6c4-37af519991a0-kube-api-access-jrgbj\") pod \"00476f70-efc5-41b2-a6c4-37af519991a0\" (UID: \"00476f70-efc5-41b2-a6c4-37af519991a0\") " Sep 8 23:44:21.209586 kubelet[2650]: I0908 23:44:21.209185 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-hubble-tls\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209586 kubelet[2650]: I0908 23:44:21.209204 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3aab31a6-a275-4dfa-bd80-dbb77785a728-clustermesh-secrets\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209586 kubelet[2650]: I0908 23:44:21.209221 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-lib-modules\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: 
\"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209710 kubelet[2650]: I0908 23:44:21.209239 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00476f70-efc5-41b2-a6c4-37af519991a0-cilium-config-path\") pod \"00476f70-efc5-41b2-a6c4-37af519991a0\" (UID: \"00476f70-efc5-41b2-a6c4-37af519991a0\") " Sep 8 23:44:21.209710 kubelet[2650]: I0908 23:44:21.209254 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-net\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209710 kubelet[2650]: I0908 23:44:21.209276 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-etc-cni-netd\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.209710 kubelet[2650]: I0908 23:44:21.209289 2650 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-cgroup\") pod \"3aab31a6-a275-4dfa-bd80-dbb77785a728\" (UID: \"3aab31a6-a275-4dfa-bd80-dbb77785a728\") " Sep 8 23:44:21.211358 kubelet[2650]: I0908 23:44:21.211328 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.211557 kubelet[2650]: I0908 23:44:21.211336 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.212139 kubelet[2650]: I0908 23:44:21.212086 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.212139 kubelet[2650]: I0908 23:44:21.212127 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.212139 kubelet[2650]: I0908 23:44:21.212147 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-hostproc" (OuterVolumeSpecName: "hostproc") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.212388 kubelet[2650]: I0908 23:44:21.212224 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.212388 kubelet[2650]: I0908 23:44:21.212246 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cni-path" (OuterVolumeSpecName: "cni-path") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.218702 kubelet[2650]: I0908 23:44:21.218468 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:44:21.219294 kubelet[2650]: I0908 23:44:21.219262 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:44:21.219407 kubelet[2650]: I0908 23:44:21.219293 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-kube-api-access-rfw88" (OuterVolumeSpecName: "kube-api-access-rfw88") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "kube-api-access-rfw88". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:44:21.219461 kubelet[2650]: I0908 23:44:21.219327 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.219514 kubelet[2650]: I0908 23:44:21.219343 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.219611 kubelet[2650]: I0908 23:44:21.219584 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 8 23:44:21.220101 kubelet[2650]: I0908 23:44:21.220069 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00476f70-efc5-41b2-a6c4-37af519991a0-kube-api-access-jrgbj" (OuterVolumeSpecName: "kube-api-access-jrgbj") pod "00476f70-efc5-41b2-a6c4-37af519991a0" (UID: "00476f70-efc5-41b2-a6c4-37af519991a0"). InnerVolumeSpecName "kube-api-access-jrgbj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 8 23:44:21.220274 kubelet[2650]: I0908 23:44:21.220245 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00476f70-efc5-41b2-a6c4-37af519991a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00476f70-efc5-41b2-a6c4-37af519991a0" (UID: "00476f70-efc5-41b2-a6c4-37af519991a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 8 23:44:21.221369 kubelet[2650]: I0908 23:44:21.221341 2650 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3aab31a6-a275-4dfa-bd80-dbb77785a728-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3aab31a6-a275-4dfa-bd80-dbb77785a728" (UID: "3aab31a6-a275-4dfa-bd80-dbb77785a728"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 8 23:44:21.309859 kubelet[2650]: I0908 23:44:21.309767 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.309859 kubelet[2650]: I0908 23:44:21.309829 2650 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.309859 kubelet[2650]: I0908 23:44:21.309840 2650 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.309859 kubelet[2650]: I0908 23:44:21.309853 2650 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rfw88\" (UniqueName: \"kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-kube-api-access-rfw88\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.309859 kubelet[2650]: I0908 23:44:21.309877 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309885 2650 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309893 2650 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309901 2650 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jrgbj\" (UniqueName: \"kubernetes.io/projected/00476f70-efc5-41b2-a6c4-37af519991a0-kube-api-access-jrgbj\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309911 2650 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3aab31a6-a275-4dfa-bd80-dbb77785a728-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309918 2650 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3aab31a6-a275-4dfa-bd80-dbb77785a728-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309925 2650 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309933 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00476f70-efc5-41b2-a6c4-37af519991a0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310082 kubelet[2650]: I0908 23:44:21.309954 2650 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310279 kubelet[2650]: I0908 23:44:21.309962 2650 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310279 kubelet[2650]: I0908 23:44:21.309969 2650 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.310279 kubelet[2650]: I0908 23:44:21.309977 2650 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3aab31a6-a275-4dfa-bd80-dbb77785a728-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 8 23:44:21.502027 kubelet[2650]: E0908 23:44:21.501898 2650 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:44:21.656005 systemd[1]: Removed slice kubepods-burstable-pod3aab31a6_a275_4dfa_bd80_dbb77785a728.slice - libcontainer container kubepods-burstable-pod3aab31a6_a275_4dfa_bd80_dbb77785a728.slice. Sep 8 23:44:21.656115 systemd[1]: kubepods-burstable-pod3aab31a6_a275_4dfa_bd80_dbb77785a728.slice: Consumed 6.209s CPU time, 127.1M memory peak, 132K read from disk, 14.3M written to disk. Sep 8 23:44:21.658479 systemd[1]: Removed slice kubepods-besteffort-pod00476f70_efc5_41b2_a6c4_37af519991a0.slice - libcontainer container kubepods-besteffort-pod00476f70_efc5_41b2_a6c4_37af519991a0.slice. 
Sep 8 23:44:21.661440 kubelet[2650]: I0908 23:44:21.661286 2650 scope.go:117] "RemoveContainer" containerID="b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008" Sep 8 23:44:21.663367 containerd[1521]: time="2025-09-08T23:44:21.663335158Z" level=info msg="RemoveContainer for \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\"" Sep 8 23:44:21.681423 containerd[1521]: time="2025-09-08T23:44:21.681230828Z" level=info msg="RemoveContainer for \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" returns successfully" Sep 8 23:44:21.682322 kubelet[2650]: I0908 23:44:21.681573 2650 scope.go:117] "RemoveContainer" containerID="482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb" Sep 8 23:44:21.685144 containerd[1521]: time="2025-09-08T23:44:21.685102310Z" level=info msg="RemoveContainer for \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\"" Sep 8 23:44:21.695538 containerd[1521]: time="2025-09-08T23:44:21.695498221Z" level=info msg="RemoveContainer for \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" returns successfully" Sep 8 23:44:21.695985 kubelet[2650]: I0908 23:44:21.695949 2650 scope.go:117] "RemoveContainer" containerID="550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749" Sep 8 23:44:21.699798 containerd[1521]: time="2025-09-08T23:44:21.699751586Z" level=info msg="RemoveContainer for \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\"" Sep 8 23:44:21.707241 containerd[1521]: time="2025-09-08T23:44:21.707202586Z" level=info msg="RemoveContainer for \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" returns successfully" Sep 8 23:44:21.707564 kubelet[2650]: I0908 23:44:21.707541 2650 scope.go:117] "RemoveContainer" containerID="33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382" Sep 8 23:44:21.709184 containerd[1521]: time="2025-09-08T23:44:21.708901324Z" level=info msg="RemoveContainer for 
\"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\"" Sep 8 23:44:21.712002 containerd[1521]: time="2025-09-08T23:44:21.711974676Z" level=info msg="RemoveContainer for \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" returns successfully" Sep 8 23:44:21.712420 kubelet[2650]: I0908 23:44:21.712394 2650 scope.go:117] "RemoveContainer" containerID="8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e" Sep 8 23:44:21.714010 containerd[1521]: time="2025-09-08T23:44:21.713972538Z" level=info msg="RemoveContainer for \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\"" Sep 8 23:44:21.717152 containerd[1521]: time="2025-09-08T23:44:21.717117531Z" level=info msg="RemoveContainer for \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" returns successfully" Sep 8 23:44:21.717422 kubelet[2650]: I0908 23:44:21.717402 2650 scope.go:117] "RemoveContainer" containerID="b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008" Sep 8 23:44:21.717798 containerd[1521]: time="2025-09-08T23:44:21.717762698Z" level=error msg="ContainerStatus for \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\": not found" Sep 8 23:44:21.721187 kubelet[2650]: E0908 23:44:21.721127 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\": not found" containerID="b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008" Sep 8 23:44:21.721262 kubelet[2650]: I0908 23:44:21.721205 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008"} err="failed to get container 
status \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\": rpc error: code = NotFound desc = an error occurred when try to find container \"b4b63458663fcdccb94bac77ec13321058032db608836111e57e1d296d694008\": not found" Sep 8 23:44:21.721262 kubelet[2650]: I0908 23:44:21.721248 2650 scope.go:117] "RemoveContainer" containerID="482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb" Sep 8 23:44:21.721718 containerd[1521]: time="2025-09-08T23:44:21.721612219Z" level=error msg="ContainerStatus for \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\": not found" Sep 8 23:44:21.721845 kubelet[2650]: E0908 23:44:21.721774 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\": not found" containerID="482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb" Sep 8 23:44:21.721845 kubelet[2650]: I0908 23:44:21.721794 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb"} err="failed to get container status \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"482f602586cbc9174ba2a43d85363621c5f49fa82a92fea13051a841d7f838bb\": not found" Sep 8 23:44:21.721845 kubelet[2650]: I0908 23:44:21.721809 2650 scope.go:117] "RemoveContainer" containerID="550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749" Sep 8 23:44:21.722083 containerd[1521]: time="2025-09-08T23:44:21.722049824Z" level=error msg="ContainerStatus for \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\": not found" Sep 8 23:44:21.722301 kubelet[2650]: E0908 23:44:21.722242 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\": not found" containerID="550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749" Sep 8 23:44:21.722301 kubelet[2650]: I0908 23:44:21.722271 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749"} err="failed to get container status \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\": rpc error: code = NotFound desc = an error occurred when try to find container \"550349e748525c62ff131019308eac9897b1b19f516d3b863fef174fdf0fc749\": not found" Sep 8 23:44:21.722384 kubelet[2650]: I0908 23:44:21.722287 2650 scope.go:117] "RemoveContainer" containerID="33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382" Sep 8 23:44:21.722631 containerd[1521]: time="2025-09-08T23:44:21.722596470Z" level=error msg="ContainerStatus for \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\": not found" Sep 8 23:44:21.723097 kubelet[2650]: E0908 23:44:21.722973 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\": not found" containerID="33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382" Sep 8 23:44:21.723097 kubelet[2650]: I0908 23:44:21.723008 
2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382"} err="failed to get container status \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\": rpc error: code = NotFound desc = an error occurred when try to find container \"33ad90bec6f74785a29a0f45c3193eb54448360bc8a48d96509dfdaa151a1382\": not found" Sep 8 23:44:21.723097 kubelet[2650]: I0908 23:44:21.723025 2650 scope.go:117] "RemoveContainer" containerID="8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e" Sep 8 23:44:21.723284 containerd[1521]: time="2025-09-08T23:44:21.723239197Z" level=error msg="ContainerStatus for \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\": not found" Sep 8 23:44:21.723421 kubelet[2650]: E0908 23:44:21.723401 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\": not found" containerID="8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e" Sep 8 23:44:21.723462 kubelet[2650]: I0908 23:44:21.723428 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e"} err="failed to get container status \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f06c61ba59f20f25724e16eb01958987d7b1a7d91aaeae7bda0e07a44ff460e\": not found" Sep 8 23:44:21.723462 kubelet[2650]: I0908 23:44:21.723444 2650 scope.go:117] "RemoveContainer" 
containerID="44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793" Sep 8 23:44:21.725266 containerd[1521]: time="2025-09-08T23:44:21.725235178Z" level=info msg="RemoveContainer for \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\"" Sep 8 23:44:21.736639 containerd[1521]: time="2025-09-08T23:44:21.736603499Z" level=info msg="RemoveContainer for \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" returns successfully" Sep 8 23:44:21.736993 kubelet[2650]: I0908 23:44:21.736876 2650 scope.go:117] "RemoveContainer" containerID="44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793" Sep 8 23:44:21.737247 containerd[1521]: time="2025-09-08T23:44:21.737215186Z" level=error msg="ContainerStatus for \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\": not found" Sep 8 23:44:21.737485 kubelet[2650]: E0908 23:44:21.737461 2650 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\": not found" containerID="44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793" Sep 8 23:44:21.737485 kubelet[2650]: I0908 23:44:21.737499 2650 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793"} err="failed to get container status \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\": rpc error: code = NotFound desc = an error occurred when try to find container \"44b250cffa2896bb3e451a3f573617ed76a427ef6b254edf87240db551358793\": not found" Sep 8 23:44:22.011897 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-731b15a6ce25a2d1bb61c3d53db970fe23766d67bf98ad2f24fc7785d8add3d6-shm.mount: Deactivated successfully. Sep 8 23:44:22.012001 systemd[1]: var-lib-kubelet-pods-3aab31a6\x2da275\x2d4dfa\x2dbd80\x2ddbb77785a728-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drfw88.mount: Deactivated successfully. Sep 8 23:44:22.012056 systemd[1]: var-lib-kubelet-pods-3aab31a6\x2da275\x2d4dfa\x2dbd80\x2ddbb77785a728-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 8 23:44:22.012118 systemd[1]: var-lib-kubelet-pods-3aab31a6\x2da275\x2d4dfa\x2dbd80\x2ddbb77785a728-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Sep 8 23:44:22.012196 systemd[1]: var-lib-kubelet-pods-00476f70\x2defc5\x2d41b2\x2da6c4\x2d37af519991a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djrgbj.mount: Deactivated successfully. Sep 8 23:44:22.442212 kubelet[2650]: I0908 23:44:22.442100 2650 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00476f70-efc5-41b2-a6c4-37af519991a0" path="/var/lib/kubelet/pods/00476f70-efc5-41b2-a6c4-37af519991a0/volumes" Sep 8 23:44:22.442547 kubelet[2650]: I0908 23:44:22.442514 2650 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3aab31a6-a275-4dfa-bd80-dbb77785a728" path="/var/lib/kubelet/pods/3aab31a6-a275-4dfa-bd80-dbb77785a728/volumes" Sep 8 23:44:22.896741 sshd[4264]: Connection closed by 10.0.0.1 port 36550 Sep 8 23:44:22.896942 sshd-session[4262]: pam_unix(sshd:session): session closed for user core Sep 8 23:44:22.914766 systemd[1]: sshd@22-10.0.0.77:22-10.0.0.1:36550.service: Deactivated successfully. Sep 8 23:44:22.917571 systemd[1]: session-23.scope: Deactivated successfully. Sep 8 23:44:22.917932 systemd[1]: session-23.scope: Consumed 1.548s CPU time, 26.5M memory peak. Sep 8 23:44:22.919351 systemd-logind[1491]: Session 23 logged out. Waiting for processes to exit. 
Sep 8 23:44:22.921923 systemd[1]: Started sshd@23-10.0.0.77:22-10.0.0.1:52442.service - OpenSSH per-connection server daemon (10.0.0.1:52442). Sep 8 23:44:22.922781 systemd-logind[1491]: Removed session 23. Sep 8 23:44:22.982993 sshd[4417]: Accepted publickey for core from 10.0.0.1 port 52442 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc Sep 8 23:44:22.984372 sshd-session[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:44:22.988471 systemd-logind[1491]: New session 24 of user core. Sep 8 23:44:23.002512 systemd[1]: Started session-24.scope - Session 24 of User core. Sep 8 23:44:23.440866 kubelet[2650]: E0908 23:44:23.440472 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:23.440866 kubelet[2650]: E0908 23:44:23.440624 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:23.704663 sshd[4419]: Connection closed by 10.0.0.1 port 52442 Sep 8 23:44:23.705190 sshd-session[4417]: pam_unix(sshd:session): session closed for user core Sep 8 23:44:23.718235 systemd[1]: sshd@23-10.0.0.77:22-10.0.0.1:52442.service: Deactivated successfully. Sep 8 23:44:23.721296 systemd[1]: session-24.scope: Deactivated successfully. Sep 8 23:44:23.723079 systemd-logind[1491]: Session 24 logged out. Waiting for processes to exit. Sep 8 23:44:23.731101 systemd[1]: Started sshd@24-10.0.0.77:22-10.0.0.1:52448.service - OpenSSH per-connection server daemon (10.0.0.1:52448). Sep 8 23:44:23.734547 systemd-logind[1491]: Removed session 24. Sep 8 23:44:23.755604 systemd[1]: Created slice kubepods-burstable-pod8bf28753_719d_4ed4_a5a2_7001e55d9240.slice - libcontainer container kubepods-burstable-pod8bf28753_719d_4ed4_a5a2_7001e55d9240.slice. 
Sep 8 23:44:23.786440 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 52448 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc Sep 8 23:44:23.788238 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:44:23.792226 systemd-logind[1491]: New session 25 of user core. Sep 8 23:44:23.800331 systemd[1]: Started session-25.scope - Session 25 of User core. Sep 8 23:44:23.826973 kubelet[2650]: I0908 23:44:23.826913 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-etc-cni-netd\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.826973 kubelet[2650]: I0908 23:44:23.826966 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-host-proc-sys-kernel\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827101 kubelet[2650]: I0908 23:44:23.826986 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qht44\" (UniqueName: \"kubernetes.io/projected/8bf28753-719d-4ed4-a5a2-7001e55d9240-kube-api-access-qht44\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827101 kubelet[2650]: I0908 23:44:23.827002 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8bf28753-719d-4ed4-a5a2-7001e55d9240-hubble-tls\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827101 kubelet[2650]: I0908 23:44:23.827027 2650 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-cilium-cgroup\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827101 kubelet[2650]: I0908 23:44:23.827043 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-xtables-lock\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827101 kubelet[2650]: I0908 23:44:23.827057 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8bf28753-719d-4ed4-a5a2-7001e55d9240-cilium-config-path\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827101 kubelet[2650]: I0908 23:44:23.827071 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-cilium-run\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827335 kubelet[2650]: I0908 23:44:23.827085 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-hostproc\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827335 kubelet[2650]: I0908 23:44:23.827100 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-lib-modules\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827335 kubelet[2650]: I0908 23:44:23.827114 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8bf28753-719d-4ed4-a5a2-7001e55d9240-clustermesh-secrets\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827335 kubelet[2650]: I0908 23:44:23.827131 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-bpf-maps\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827335 kubelet[2650]: I0908 23:44:23.827145 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-cni-path\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827335 kubelet[2650]: I0908 23:44:23.827183 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8bf28753-719d-4ed4-a5a2-7001e55d9240-cilium-ipsec-secrets\") pod \"cilium-jcl9w\" (UID: \"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.827461 kubelet[2650]: I0908 23:44:23.827199 2650 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8bf28753-719d-4ed4-a5a2-7001e55d9240-host-proc-sys-net\") pod \"cilium-jcl9w\" (UID: 
\"8bf28753-719d-4ed4-a5a2-7001e55d9240\") " pod="kube-system/cilium-jcl9w" Sep 8 23:44:23.849484 sshd[4433]: Connection closed by 10.0.0.1 port 52448 Sep 8 23:44:23.849733 sshd-session[4431]: pam_unix(sshd:session): session closed for user core Sep 8 23:44:23.863713 systemd[1]: sshd@24-10.0.0.77:22-10.0.0.1:52448.service: Deactivated successfully. Sep 8 23:44:23.866685 systemd[1]: session-25.scope: Deactivated successfully. Sep 8 23:44:23.868250 systemd-logind[1491]: Session 25 logged out. Waiting for processes to exit. Sep 8 23:44:23.870826 systemd[1]: Started sshd@25-10.0.0.77:22-10.0.0.1:52452.service - OpenSSH per-connection server daemon (10.0.0.1:52452). Sep 8 23:44:23.872374 systemd-logind[1491]: Removed session 25. Sep 8 23:44:23.932664 sshd[4440]: Accepted publickey for core from 10.0.0.1 port 52452 ssh2: RSA SHA256:LTMgZj3AhUbvMnCK/3D915he0nK2GexwG9p0y0Iy9qc Sep 8 23:44:23.934081 sshd-session[4440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 8 23:44:23.951731 systemd-logind[1491]: New session 26 of user core. Sep 8 23:44:23.958319 systemd[1]: Started session-26.scope - Session 26 of User core. 
Sep 8 23:44:24.060981 kubelet[2650]: E0908 23:44:24.060904 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:24.061903 containerd[1521]: time="2025-09-08T23:44:24.061824148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jcl9w,Uid:8bf28753-719d-4ed4-a5a2-7001e55d9240,Namespace:kube-system,Attempt:0,}" Sep 8 23:44:24.083610 containerd[1521]: time="2025-09-08T23:44:24.083567878Z" level=info msg="connecting to shim 16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9" address="unix:///run/containerd/s/974599a238a85e616fa08290c6be8286ddd7240dd3dcef9bf9a19d0887960bd7" namespace=k8s.io protocol=ttrpc version=3 Sep 8 23:44:24.113401 systemd[1]: Started cri-containerd-16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9.scope - libcontainer container 16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9. 
Sep 8 23:44:24.138243 containerd[1521]: time="2025-09-08T23:44:24.138190644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jcl9w,Uid:8bf28753-719d-4ed4-a5a2-7001e55d9240,Namespace:kube-system,Attempt:0,} returns sandbox id \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\"" Sep 8 23:44:24.139235 kubelet[2650]: E0908 23:44:24.139037 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:24.143567 containerd[1521]: time="2025-09-08T23:44:24.143528655Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 8 23:44:24.150593 containerd[1521]: time="2025-09-08T23:44:24.150543843Z" level=info msg="Container cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:44:24.155672 containerd[1521]: time="2025-09-08T23:44:24.155541811Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62\"" Sep 8 23:44:24.156362 containerd[1521]: time="2025-09-08T23:44:24.156118897Z" level=info msg="StartContainer for \"cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62\"" Sep 8 23:44:24.157796 containerd[1521]: time="2025-09-08T23:44:24.157765993Z" level=info msg="connecting to shim cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62" address="unix:///run/containerd/s/974599a238a85e616fa08290c6be8286ddd7240dd3dcef9bf9a19d0887960bd7" protocol=ttrpc version=3 Sep 8 23:44:24.176379 systemd[1]: Started cri-containerd-cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62.scope - libcontainer container 
cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62. Sep 8 23:44:24.202908 containerd[1521]: time="2025-09-08T23:44:24.202869587Z" level=info msg="StartContainer for \"cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62\" returns successfully" Sep 8 23:44:24.210566 systemd[1]: cri-containerd-cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62.scope: Deactivated successfully. Sep 8 23:44:24.212418 containerd[1521]: time="2025-09-08T23:44:24.212378039Z" level=info msg="received exit event container_id:\"cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62\" id:\"cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62\" pid:4511 exited_at:{seconds:1757375064 nanos:211986315}" Sep 8 23:44:24.212609 containerd[1521]: time="2025-09-08T23:44:24.212450919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62\" id:\"cee6de15646a29d3c5e3fea343240bab5edf0c88b09c407b6dc0701a1f66ba62\" pid:4511 exited_at:{seconds:1757375064 nanos:211986315}" Sep 8 23:44:24.659832 kubelet[2650]: E0908 23:44:24.659793 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:24.667188 containerd[1521]: time="2025-09-08T23:44:24.665371243Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 8 23:44:24.672376 containerd[1521]: time="2025-09-08T23:44:24.672336350Z" level=info msg="Container b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:44:24.678879 containerd[1521]: time="2025-09-08T23:44:24.678813493Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" 
for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea\"" Sep 8 23:44:24.679926 containerd[1521]: time="2025-09-08T23:44:24.679902143Z" level=info msg="StartContainer for \"b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea\"" Sep 8 23:44:24.680693 containerd[1521]: time="2025-09-08T23:44:24.680669671Z" level=info msg="connecting to shim b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea" address="unix:///run/containerd/s/974599a238a85e616fa08290c6be8286ddd7240dd3dcef9bf9a19d0887960bd7" protocol=ttrpc version=3 Sep 8 23:44:24.700095 systemd[1]: Started cri-containerd-b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea.scope - libcontainer container b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea. Sep 8 23:44:24.730299 containerd[1521]: time="2025-09-08T23:44:24.730256748Z" level=info msg="StartContainer for \"b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea\" returns successfully" Sep 8 23:44:24.735195 systemd[1]: cri-containerd-b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea.scope: Deactivated successfully. 
Sep 8 23:44:24.737460 containerd[1521]: time="2025-09-08T23:44:24.737342017Z" level=info msg="received exit event container_id:\"b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea\" id:\"b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea\" pid:4557 exited_at:{seconds:1757375064 nanos:737182415}" Sep 8 23:44:24.737535 containerd[1521]: time="2025-09-08T23:44:24.737455298Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea\" id:\"b7ae6eedad9dd1b0cb05e9043fa39e185bec28939856761147534cdd6d0f30ea\" pid:4557 exited_at:{seconds:1757375064 nanos:737182415}" Sep 8 23:44:25.663289 kubelet[2650]: E0908 23:44:25.663260 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:25.669944 containerd[1521]: time="2025-09-08T23:44:25.669589102Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 8 23:44:25.677385 containerd[1521]: time="2025-09-08T23:44:25.677351134Z" level=info msg="Container e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:44:25.683025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1983575339.mount: Deactivated successfully. 
Sep 8 23:44:25.693134 containerd[1521]: time="2025-09-08T23:44:25.693084521Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9\"" Sep 8 23:44:25.693590 containerd[1521]: time="2025-09-08T23:44:25.693565765Z" level=info msg="StartContainer for \"e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9\"" Sep 8 23:44:25.695504 containerd[1521]: time="2025-09-08T23:44:25.695450183Z" level=info msg="connecting to shim e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9" address="unix:///run/containerd/s/974599a238a85e616fa08290c6be8286ddd7240dd3dcef9bf9a19d0887960bd7" protocol=ttrpc version=3 Sep 8 23:44:25.720512 systemd[1]: Started cri-containerd-e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9.scope - libcontainer container e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9. Sep 8 23:44:25.754225 systemd[1]: cri-containerd-e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9.scope: Deactivated successfully. 
Sep 8 23:44:25.756729 containerd[1521]: time="2025-09-08T23:44:25.756201388Z" level=info msg="received exit event container_id:\"e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9\" id:\"e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9\" pid:4602 exited_at:{seconds:1757375065 nanos:755978946}" Sep 8 23:44:25.756729 containerd[1521]: time="2025-09-08T23:44:25.756442111Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9\" id:\"e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9\" pid:4602 exited_at:{seconds:1757375065 nanos:755978946}" Sep 8 23:44:25.764070 containerd[1521]: time="2025-09-08T23:44:25.763991701Z" level=info msg="StartContainer for \"e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9\" returns successfully" Sep 8 23:44:25.775941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e7810ecc417213f8049cc6adc5e25a5c5836fe40f4901081ee7e6908ef2a9ac9-rootfs.mount: Deactivated successfully. 
Sep 8 23:44:26.503318 kubelet[2650]: E0908 23:44:26.503275 2650 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 8 23:44:26.668433 kubelet[2650]: E0908 23:44:26.668252 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:26.676185 containerd[1521]: time="2025-09-08T23:44:26.674792690Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 8 23:44:26.699421 containerd[1521]: time="2025-09-08T23:44:26.699376991Z" level=info msg="Container 0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:44:26.709018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount947573690.mount: Deactivated successfully. 
Sep 8 23:44:26.714706 containerd[1521]: time="2025-09-08T23:44:26.714540568Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d\"" Sep 8 23:44:26.715414 containerd[1521]: time="2025-09-08T23:44:26.715339535Z" level=info msg="StartContainer for \"0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d\"" Sep 8 23:44:26.717060 containerd[1521]: time="2025-09-08T23:44:26.717029430Z" level=info msg="connecting to shim 0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d" address="unix:///run/containerd/s/974599a238a85e616fa08290c6be8286ddd7240dd3dcef9bf9a19d0887960bd7" protocol=ttrpc version=3 Sep 8 23:44:26.745396 systemd[1]: Started cri-containerd-0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d.scope - libcontainer container 0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d. Sep 8 23:44:26.775514 systemd[1]: cri-containerd-0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d.scope: Deactivated successfully. 
Sep 8 23:44:26.777716 containerd[1521]: time="2025-09-08T23:44:26.776545085Z" level=info msg="received exit event container_id:\"0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d\" id:\"0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d\" pid:4642 exited_at:{seconds:1757375066 nanos:775717278}" Sep 8 23:44:26.777716 containerd[1521]: time="2025-09-08T23:44:26.776602726Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d\" id:\"0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d\" pid:4642 exited_at:{seconds:1757375066 nanos:775717278}" Sep 8 23:44:26.778365 containerd[1521]: time="2025-09-08T23:44:26.778327022Z" level=info msg="StartContainer for \"0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d\" returns successfully" Sep 8 23:44:27.673859 kubelet[2650]: E0908 23:44:27.673808 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:27.677658 containerd[1521]: time="2025-09-08T23:44:27.677619427Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 8 23:44:27.685027 containerd[1521]: time="2025-09-08T23:44:27.684989011Z" level=info msg="Container 4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b: CDI devices from CRI Config.CDIDevices: []" Sep 8 23:44:27.692203 containerd[1521]: time="2025-09-08T23:44:27.691443147Z" level=info msg="CreateContainer within sandbox \"16e6ac97f83e823dc0d9fe62cd56504c916ab2d8daabff62522f6d0aee2e1fc9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\"" Sep 8 23:44:27.691557 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-0b74a87143ef1b1a529f5666757a51ef869772130bcab3de99524848991b9d5d-rootfs.mount: Deactivated successfully. Sep 8 23:44:27.692397 containerd[1521]: time="2025-09-08T23:44:27.692305875Z" level=info msg="StartContainer for \"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\"" Sep 8 23:44:27.693185 containerd[1521]: time="2025-09-08T23:44:27.693136482Z" level=info msg="connecting to shim 4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b" address="unix:///run/containerd/s/974599a238a85e616fa08290c6be8286ddd7240dd3dcef9bf9a19d0887960bd7" protocol=ttrpc version=3 Sep 8 23:44:27.715345 systemd[1]: Started cri-containerd-4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b.scope - libcontainer container 4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b. Sep 8 23:44:27.760831 containerd[1521]: time="2025-09-08T23:44:27.760648909Z" level=info msg="StartContainer for \"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\" returns successfully" Sep 8 23:44:27.813962 containerd[1521]: time="2025-09-08T23:44:27.813922412Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\" id:\"4c14215ce49e21f4f79e9b9bc70a4f49db1ad845c0d09ddb2ae24dfdee500f08\" pid:4708 exited_at:{seconds:1757375067 nanos:813628370}" Sep 8 23:44:27.957539 kubelet[2650]: I0908 23:44:27.956969 2650 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-09-08T23:44:27Z","lastTransitionTime":"2025-09-08T23:44:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Sep 8 23:44:28.042213 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 8 23:44:28.679640 kubelet[2650]: E0908 
23:44:28.679551 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:28.695833 kubelet[2650]: I0908 23:44:28.695711 2650 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jcl9w" podStartSLOduration=5.695694194 podStartE2EDuration="5.695694194s" podCreationTimestamp="2025-09-08 23:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-08 23:44:28.694957347 +0000 UTC m=+82.351020379" watchObservedRunningTime="2025-09-08 23:44:28.695694194 +0000 UTC m=+82.351757226" Sep 8 23:44:30.062359 kubelet[2650]: E0908 23:44:30.062307 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:30.431891 containerd[1521]: time="2025-09-08T23:44:30.431675448Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\" id:\"b76e8bd97c7d26b604d7a927c20c1d618ff5b6d7f84dbdb4fd5aba528008ab7e\" pid:5103 exit_status:1 exited_at:{seconds:1757375070 nanos:431404366}" Sep 8 23:44:30.952376 systemd-networkd[1434]: lxc_health: Link UP Sep 8 23:44:30.957773 systemd-networkd[1434]: lxc_health: Gained carrier Sep 8 23:44:32.063625 kubelet[2650]: E0908 23:44:32.063336 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:32.543321 systemd-networkd[1434]: lxc_health: Gained IPv6LL Sep 8 23:44:32.549660 containerd[1521]: time="2025-09-08T23:44:32.549597453Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\" 
id:\"1776fbd5f4a6821853690394200a0f0998bb7ba9e021012c551f05e68809e4be\" pid:5252 exited_at:{seconds:1757375072 nanos:549216570}" Sep 8 23:44:32.687958 kubelet[2650]: E0908 23:44:32.687817 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:33.689715 kubelet[2650]: E0908 23:44:33.689654 2650 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 8 23:44:34.674020 containerd[1521]: time="2025-09-08T23:44:34.673971922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\" id:\"0e8e017ab7225b58b73b223482d7b978dcd4279a4ca367af8858441668be7b61\" pid:5285 exited_at:{seconds:1757375074 nanos:673611919}" Sep 8 23:44:36.794446 containerd[1521]: time="2025-09-08T23:44:36.794374014Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4c7c1b17cfc51d31b8bc7698b7f84085d47c33c750995574d10beb6a4007b75b\" id:\"b59847804f5df0d454401607e97425eeaceca2bb05838183801655a6e384dde7\" pid:5310 exited_at:{seconds:1757375076 nanos:793953411}" Sep 8 23:44:36.802587 sshd[4446]: Connection closed by 10.0.0.1 port 52452 Sep 8 23:44:36.802491 sshd-session[4440]: pam_unix(sshd:session): session closed for user core Sep 8 23:44:36.806742 systemd[1]: sshd@25-10.0.0.77:22-10.0.0.1:52452.service: Deactivated successfully. Sep 8 23:44:36.808735 systemd[1]: session-26.scope: Deactivated successfully. Sep 8 23:44:36.809880 systemd-logind[1491]: Session 26 logged out. Waiting for processes to exit. Sep 8 23:44:36.811097 systemd-logind[1491]: Removed session 26.