Nov 4 12:22:09.325162 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 4 12:22:09.325186 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Nov 4 10:59:33 -00 2025 Nov 4 12:22:09.325195 kernel: KASLR enabled Nov 4 12:22:09.325201 kernel: efi: EFI v2.7 by EDK II Nov 4 12:22:09.325207 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Nov 4 12:22:09.325212 kernel: random: crng init done Nov 4 12:22:09.325219 kernel: secureboot: Secure boot disabled Nov 4 12:22:09.325225 kernel: ACPI: Early table checksum verification disabled Nov 4 12:22:09.325233 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Nov 4 12:22:09.325239 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 4 12:22:09.325246 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325252 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325258 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325264 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325273 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325279 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325286 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325292 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325310 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 4 12:22:09.325317 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 4 12:22:09.325324 kernel: ACPI: Use ACPI SPCR as default console: No Nov 4 12:22:09.325330 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 4 12:22:09.325339 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Nov 4 12:22:09.325345 kernel: Zone ranges: Nov 4 12:22:09.325352 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 4 12:22:09.325358 kernel: DMA32 empty Nov 4 12:22:09.325364 kernel: Normal empty Nov 4 12:22:09.325370 kernel: Device empty Nov 4 12:22:09.325377 kernel: Movable zone start for each node Nov 4 12:22:09.325383 kernel: Early memory node ranges Nov 4 12:22:09.325389 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Nov 4 12:22:09.325396 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Nov 4 12:22:09.325402 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Nov 4 12:22:09.325408 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Nov 4 12:22:09.325416 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Nov 4 12:22:09.325422 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Nov 4 12:22:09.325428 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Nov 4 12:22:09.325435 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Nov 4 12:22:09.325441 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Nov 4 12:22:09.325447 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 4 12:22:09.325457 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 4 12:22:09.325464 kernel: node 0: [mem 
0x00000000dcec0000-0x00000000dcfdffff] Nov 4 12:22:09.325471 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 4 12:22:09.325477 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 4 12:22:09.325484 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 4 12:22:09.325491 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Nov 4 12:22:09.325497 kernel: psci: probing for conduit method from ACPI. Nov 4 12:22:09.325504 kernel: psci: PSCIv1.1 detected in firmware. Nov 4 12:22:09.325512 kernel: psci: Using standard PSCI v0.2 function IDs Nov 4 12:22:09.325519 kernel: psci: Trusted OS migration not required Nov 4 12:22:09.325525 kernel: psci: SMC Calling Convention v1.1 Nov 4 12:22:09.325533 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 4 12:22:09.325539 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 4 12:22:09.325546 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 4 12:22:09.325553 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 4 12:22:09.325560 kernel: Detected PIPT I-cache on CPU0 Nov 4 12:22:09.325567 kernel: CPU features: detected: GIC system register CPU interface Nov 4 12:22:09.325573 kernel: CPU features: detected: Spectre-v4 Nov 4 12:22:09.325580 kernel: CPU features: detected: Spectre-BHB Nov 4 12:22:09.325588 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 4 12:22:09.325595 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 4 12:22:09.325601 kernel: CPU features: detected: ARM erratum 1418040 Nov 4 12:22:09.325608 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 4 12:22:09.325615 kernel: alternatives: applying boot alternatives Nov 4 12:22:09.325622 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=03857d169a2df39cb9cf428f5c3ec4e76f72bbd8ea41fdc44c442b7e7c3fbee3 Nov 4 12:22:09.325629 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 4 12:22:09.325636 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 4 12:22:09.325643 kernel: Fallback order for Node 0: 0 Nov 4 12:22:09.325650 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 4 12:22:09.325657 kernel: Policy zone: DMA Nov 4 12:22:09.325664 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 4 12:22:09.325671 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 4 12:22:09.325678 kernel: software IO TLB: area num 4. Nov 4 12:22:09.325684 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 4 12:22:09.325691 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Nov 4 12:22:09.325698 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 4 12:22:09.325705 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 4 12:22:09.325712 kernel: rcu: RCU event tracing is enabled. Nov 4 12:22:09.325719 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 4 12:22:09.325726 kernel: Trampoline variant of Tasks RCU enabled. Nov 4 12:22:09.325734 kernel: Tracing variant of Tasks RCU enabled. Nov 4 12:22:09.325741 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 4 12:22:09.325748 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 4 12:22:09.325755 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 4 12:22:09.325762 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 4 12:22:09.325769 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 4 12:22:09.325775 kernel: GICv3: 256 SPIs implemented Nov 4 12:22:09.325782 kernel: GICv3: 0 Extended SPIs implemented Nov 4 12:22:09.325789 kernel: Root IRQ handler: gic_handle_irq Nov 4 12:22:09.325795 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 4 12:22:09.325802 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 4 12:22:09.325810 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 4 12:22:09.325817 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 4 12:22:09.325824 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 4 12:22:09.325831 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 4 12:22:09.325838 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 4 12:22:09.325844 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 4 12:22:09.325851 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 4 12:22:09.325858 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:22:09.325865 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 4 12:22:09.325872 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 4 12:22:09.325879 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 4 12:22:09.325887 kernel: arm-pv: using stolen time PV Nov 4 12:22:09.325894 kernel: Console: colour dummy device 80x25 Nov 4 12:22:09.325902 kernel: ACPI: Core revision 20240827 Nov 4 12:22:09.325909 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 4 12:22:09.325916 kernel: pid_max: default: 32768 minimum: 301 Nov 4 12:22:09.325923 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 4 12:22:09.325930 kernel: landlock: Up and running. Nov 4 12:22:09.325937 kernel: SELinux: Initializing. Nov 4 12:22:09.325945 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 4 12:22:09.325953 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 4 12:22:09.325960 kernel: rcu: Hierarchical SRCU implementation. Nov 4 12:22:09.325968 kernel: rcu: Max phase no-delay instances is 400. Nov 4 12:22:09.325975 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 4 12:22:09.325982 kernel: Remapping and enabling EFI services. Nov 4 12:22:09.325989 kernel: smp: Bringing up secondary CPUs ... 
Nov 4 12:22:09.325998 kernel: Detected PIPT I-cache on CPU1 Nov 4 12:22:09.326009 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 4 12:22:09.326018 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 4 12:22:09.326035 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:22:09.326043 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 4 12:22:09.326050 kernel: Detected PIPT I-cache on CPU2 Nov 4 12:22:09.326058 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 4 12:22:09.326067 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 4 12:22:09.326075 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:22:09.326082 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 4 12:22:09.326089 kernel: Detected PIPT I-cache on CPU3 Nov 4 12:22:09.326097 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 4 12:22:09.326105 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 4 12:22:09.326112 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 4 12:22:09.326121 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 4 12:22:09.326128 kernel: smp: Brought up 1 node, 4 CPUs Nov 4 12:22:09.326136 kernel: SMP: Total of 4 processors activated. Nov 4 12:22:09.326143 kernel: CPU: All CPU(s) started at EL1 Nov 4 12:22:09.326151 kernel: CPU features: detected: 32-bit EL0 Support Nov 4 12:22:09.326158 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 4 12:22:09.326166 kernel: CPU features: detected: Common not Private translations Nov 4 12:22:09.326174 kernel: CPU features: detected: CRC32 instructions Nov 4 12:22:09.326182 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 4 12:22:09.326189 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 4 12:22:09.326196 kernel: CPU features: detected: LSE atomic instructions Nov 4 12:22:09.326204 kernel: CPU features: detected: Privileged Access Never Nov 4 12:22:09.326211 kernel: CPU features: detected: RAS Extension Support Nov 4 12:22:09.326219 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 4 12:22:09.326226 kernel: alternatives: applying system-wide alternatives Nov 4 12:22:09.326234 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 4 12:22:09.326242 kernel: Memory: 2450400K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99552K reserved, 16384K cma-reserved) Nov 4 12:22:09.326250 kernel: devtmpfs: initialized Nov 4 12:22:09.326257 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 4 12:22:09.326265 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 4 12:22:09.326272 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 4 12:22:09.326279 kernel: 0 pages in range for non-PLT usage Nov 4 12:22:09.326288 kernel: 515056 pages in range for PLT usage Nov 4 12:22:09.326295 kernel: pinctrl core: initialized pinctrl subsystem Nov 4 12:22:09.326308 kernel: SMBIOS 3.0.0 present. 
Nov 4 12:22:09.326316 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 4 12:22:09.326323 kernel: DMI: Memory slots populated: 1/1 Nov 4 12:22:09.326331 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 4 12:22:09.326338 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 4 12:22:09.326348 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 4 12:22:09.326355 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 4 12:22:09.326363 kernel: audit: initializing netlink subsys (disabled) Nov 4 12:22:09.326370 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Nov 4 12:22:09.326378 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 4 12:22:09.326385 kernel: cpuidle: using governor menu Nov 4 12:22:09.326392 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 4 12:22:09.326401 kernel: ASID allocator initialised with 32768 entries Nov 4 12:22:09.326409 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 4 12:22:09.326416 kernel: Serial: AMBA PL011 UART driver Nov 4 12:22:09.326424 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 4 12:22:09.326431 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 4 12:22:09.326438 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 4 12:22:09.326446 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 4 12:22:09.326453 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 4 12:22:09.326462 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 4 12:22:09.326469 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 4 12:22:09.326477 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 4 12:22:09.326484 kernel: ACPI: Added _OSI(Module Device) Nov 4 12:22:09.326491 kernel: ACPI: Added _OSI(Processor Device) Nov 4 12:22:09.326499 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 4 12:22:09.326506 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 4 12:22:09.326515 kernel: ACPI: Interpreter enabled Nov 4 12:22:09.326522 kernel: ACPI: Using GIC for interrupt routing Nov 4 12:22:09.326530 kernel: ACPI: MCFG table detected, 1 entries Nov 4 12:22:09.326537 kernel: ACPI: CPU0 has been hot-added Nov 4 12:22:09.326544 kernel: ACPI: CPU1 has been hot-added Nov 4 12:22:09.326552 kernel: ACPI: CPU2 has been hot-added Nov 4 12:22:09.326559 kernel: ACPI: CPU3 has been hot-added Nov 4 12:22:09.326567 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 4 12:22:09.326576 kernel: printk: legacy console [ttyAMA0] enabled Nov 4 12:22:09.326583 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 4 12:22:09.326739 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 4 12:22:09.326828 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 4 12:22:09.326921 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 4 12:22:09.327016 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 4 12:22:09.327137 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 4 12:22:09.327149 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 4 12:22:09.327157 kernel: PCI host bridge to bus 0000:00 Nov 4 
12:22:09.327254 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 4 12:22:09.327345 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 4 12:22:09.327483 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 4 12:22:09.327560 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 4 12:22:09.327658 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 4 12:22:09.327749 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 4 12:22:09.327839 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 4 12:22:09.327920 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 4 12:22:09.328003 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 4 12:22:09.328102 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 4 12:22:09.328186 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 4 12:22:09.328268 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 4 12:22:09.328353 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 4 12:22:09.328427 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 4 12:22:09.328503 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 4 12:22:09.328513 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 4 12:22:09.328521 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 4 12:22:09.328529 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 4 12:22:09.328536 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 4 12:22:09.328544 kernel: iommu: Default domain type: Translated Nov 4 12:22:09.328553 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 4 12:22:09.328561 kernel: efivars: Registered efivars operations Nov 4 12:22:09.328569 kernel: vgaarb: loaded Nov 4 12:22:09.328576 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 4 12:22:09.328584 kernel: VFS: Disk quotas dquot_6.6.0 Nov 4 12:22:09.328592 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 4 12:22:09.328599 kernel: pnp: PnP ACPI init Nov 4 12:22:09.328688 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 4 12:22:09.328699 kernel: pnp: PnP ACPI: found 1 devices Nov 4 12:22:09.328706 kernel: NET: Registered PF_INET protocol family Nov 4 12:22:09.328714 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 4 12:22:09.328722 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 4 12:22:09.328730 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 4 12:22:09.328737 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 4 12:22:09.328747 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 4 12:22:09.328754 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 4 12:22:09.328762 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 4 12:22:09.328769 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 4 12:22:09.328777 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 4 12:22:09.328784 kernel: PCI: CLS 0 bytes, default 64 Nov 4 12:22:09.328792 kernel: kvm [1]: HYP mode not available Nov 4 12:22:09.328801 kernel: Initialise system 
trusted keyrings Nov 4 12:22:09.328809 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 4 12:22:09.328816 kernel: Key type asymmetric registered Nov 4 12:22:09.328824 kernel: Asymmetric key parser 'x509' registered Nov 4 12:22:09.328831 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 4 12:22:09.328839 kernel: io scheduler mq-deadline registered Nov 4 12:22:09.328846 kernel: io scheduler kyber registered Nov 4 12:22:09.328855 kernel: io scheduler bfq registered Nov 4 12:22:09.328863 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 4 12:22:09.328870 kernel: ACPI: button: Power Button [PWRB] Nov 4 12:22:09.328882 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 4 12:22:09.328964 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 4 12:22:09.328974 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 4 12:22:09.328982 kernel: thunder_xcv, ver 1.0 Nov 4 12:22:09.328991 kernel: thunder_bgx, ver 1.0 Nov 4 12:22:09.328998 kernel: nicpf, ver 1.0 Nov 4 12:22:09.329006 kernel: nicvf, ver 1.0 Nov 4 12:22:09.329106 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 4 12:22:09.329193 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-04T12:22:08 UTC (1762258928) Nov 4 12:22:09.329203 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 4 12:22:09.329211 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 4 12:22:09.329221 kernel: watchdog: NMI not fully supported Nov 4 12:22:09.329229 kernel: watchdog: Hard watchdog permanently disabled Nov 4 12:22:09.329236 kernel: NET: Registered PF_INET6 protocol family Nov 4 12:22:09.329243 kernel: Segment Routing with IPv6 Nov 4 12:22:09.329251 kernel: In-situ OAM (IOAM) with IPv6 Nov 4 12:22:09.329258 kernel: NET: Registered PF_PACKET protocol family Nov 4 12:22:09.329266 kernel: Key type dns_resolver registered Nov 4 12:22:09.329274 kernel: registered taskstats version 1 Nov 4 12:22:09.329282 kernel: Loading compiled-in X.509 certificates Nov 4 12:22:09.329289 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 663f57c0d83c90dfacd5aa64fd10e0e7f59b6b15' Nov 4 12:22:09.329303 kernel: Demotion targets for Node 0: null Nov 4 12:22:09.329312 kernel: Key type .fscrypt registered Nov 4 12:22:09.329319 kernel: Key type fscrypt-provisioning registered Nov 4 12:22:09.329326 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 4 12:22:09.329336 kernel: ima: Allocated hash algorithm: sha1 Nov 4 12:22:09.329344 kernel: ima: No architecture policies found Nov 4 12:22:09.329351 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 4 12:22:09.329359 kernel: clk: Disabling unused clocks Nov 4 12:22:09.329366 kernel: PM: genpd: Disabling unused power domains Nov 4 12:22:09.329373 kernel: Freeing unused kernel memory: 12992K Nov 4 12:22:09.329381 kernel: Run /init as init process Nov 4 12:22:09.329390 kernel: with arguments: Nov 4 12:22:09.329397 kernel: /init Nov 4 12:22:09.329404 kernel: with environment: Nov 4 12:22:09.329411 kernel: HOME=/ Nov 4 12:22:09.329419 kernel: TERM=linux Nov 4 12:22:09.329513 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 4 12:22:09.329593 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 4 12:22:09.329605 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 4 12:22:09.329613 kernel: GPT:16515071 != 27000831 Nov 4 12:22:09.329620 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 4 12:22:09.329628 kernel: GPT:16515071 != 27000831 Nov 4 12:22:09.329635 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 4 12:22:09.329642 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 4 12:22:09.329651 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329659 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329666 kernel: SCSI subsystem initialized Nov 4 12:22:09.329674 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329681 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 4 12:22:09.329689 kernel: device-mapper: uevent: version 1.0.3 Nov 4 12:22:09.329697 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 4 12:22:09.329705 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 4 12:22:09.329713 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329720 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329727 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329735 kernel: raid6: neonx8 gen() 15584 MB/s Nov 4 12:22:09.329742 kernel: raid6: neonx4 gen() 15780 MB/s Nov 4 12:22:09.329750 kernel: raid6: neonx2 gen() 13223 MB/s Nov 4 12:22:09.329757 kernel: raid6: neonx1 gen() 10508 MB/s Nov 4 12:22:09.329765 kernel: raid6: int64x8 gen() 6777 MB/s Nov 4 12:22:09.329773 kernel: raid6: int64x4 gen() 7318 MB/s Nov 4 12:22:09.329780 kernel: raid6: int64x2 gen() 6099 MB/s Nov 4 12:22:09.329787 kernel: raid6: int64x1 gen() 4829 MB/s Nov 4 12:22:09.329795 kernel: raid6: using algorithm neonx4 gen() 15780 MB/s Nov 4 12:22:09.329802 kernel: raid6: .... xor() 12334 MB/s, rmw enabled Nov 4 12:22:09.329810 kernel: raid6: using neon recovery algorithm Nov 4 12:22:09.329819 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329826 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329833 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329840 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329847 kernel: xor: measuring software checksum speed Nov 4 12:22:09.329855 kernel: 8regs : 20892 MB/sec Nov 4 12:22:09.329866 kernel: 32regs : 21681 MB/sec Nov 4 12:22:09.329875 kernel: arm64_neon : 27889 MB/sec Nov 4 12:22:09.329882 kernel: xor: using function: arm64_neon (27889 MB/sec) Nov 4 12:22:09.329891 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329898 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 4 12:22:09.329906 kernel: BTRFS: device fsid a0f53245-1da9-4f46-990c-2f6a958947c8 devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (207) Nov 4 12:22:09.329914 kernel: BTRFS info (device dm-0): first mount of filesystem a0f53245-1da9-4f46-990c-2f6a958947c8 Nov 4 12:22:09.329921 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:22:09.329929 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 4 12:22:09.329937 kernel: BTRFS info (device dm-0): enabling free space tree Nov 4 12:22:09.329945 kernel: Invalid ELF header magic: != \u007fELF Nov 4 12:22:09.329952 kernel: loop: module loaded Nov 4 12:22:09.329960 kernel: loop0: detected capacity change from 0 to 91464 Nov 4 12:22:09.329968 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 4 12:22:09.329976 systemd[1]: Successfully made /usr/ read-only. 
Nov 4 12:22:09.329987 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 12:22:09.329997 systemd[1]: Detected virtualization kvm. Nov 4 12:22:09.330005 systemd[1]: Detected architecture arm64. Nov 4 12:22:09.330012 systemd[1]: Running in initrd. Nov 4 12:22:09.330020 systemd[1]: No hostname configured, using default hostname. Nov 4 12:22:09.330043 systemd[1]: Hostname set to . Nov 4 12:22:09.330051 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 12:22:09.330061 systemd[1]: Queued start job for default target initrd.target. Nov 4 12:22:09.330069 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 12:22:09.330077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 12:22:09.330086 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 12:22:09.330094 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 4 12:22:09.330102 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 12:22:09.330112 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 4 12:22:09.330127 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 4 12:22:09.330136 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 12:22:09.330145 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 12:22:09.330153 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 4 12:22:09.330163 systemd[1]: Reached target paths.target - Path Units. Nov 4 12:22:09.330171 systemd[1]: Reached target slices.target - Slice Units. Nov 4 12:22:09.330179 systemd[1]: Reached target swap.target - Swaps. Nov 4 12:22:09.330187 systemd[1]: Reached target timers.target - Timer Units. Nov 4 12:22:09.330196 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 12:22:09.330204 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 12:22:09.330212 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 4 12:22:09.330222 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 4 12:22:09.330230 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 12:22:09.330239 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 12:22:09.330247 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:22:09.330255 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 12:22:09.330264 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 4 12:22:09.330274 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 4 12:22:09.330282 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 12:22:09.330291 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Nov 4 12:22:09.330306 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 4 12:22:09.330315 systemd[1]: Starting systemd-fsck-usr.service... Nov 4 12:22:09.330323 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 12:22:09.330332 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 12:22:09.330342 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:22:09.330351 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 4 12:22:09.330359 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:22:09.330368 systemd[1]: Finished systemd-fsck-usr.service. Nov 4 12:22:09.330378 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 12:22:09.330405 systemd-journald[343]: Collecting audit messages is disabled. Nov 4 12:22:09.330425 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 4 12:22:09.330435 systemd-journald[343]: Journal started Nov 4 12:22:09.330452 systemd-journald[343]: Runtime Journal (/run/log/journal/2b788d43372e4615817d535a50017a21) is 6M, max 48.5M, 42.4M free. Nov 4 12:22:09.332408 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 12:22:09.333407 systemd-modules-load[344]: Inserted module 'br_netfilter' Nov 4 12:22:09.334762 kernel: Bridge firewalling registered Nov 4 12:22:09.336115 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 12:22:09.338117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:22:09.340130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:22:09.343406 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 4 12:22:09.345117 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:22:09.347146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 12:22:09.349107 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 12:22:09.369100 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 12:22:09.371303 systemd-tmpfiles[365]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 4 12:22:09.371725 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:22:09.376260 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 12:22:09.380314 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 12:22:09.385121 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 12:22:09.394616 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Nov 4 12:22:09.409522 dracut-cmdline[388]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=03857d169a2df39cb9cf428f5c3ec4e76f72bbd8ea41fdc44c442b7e7c3fbee3 Nov 4 12:22:09.431218 systemd-resolved[382]: Positive Trust Anchors: Nov 4 12:22:09.431234 systemd-resolved[382]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 12:22:09.431237 systemd-resolved[382]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 12:22:09.431272 systemd-resolved[382]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 12:22:09.454092 systemd-resolved[382]: Defaulting to hostname 'linux'. Nov 4 12:22:09.455021 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 12:22:09.456146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 12:22:09.487049 kernel: Loading iSCSI transport class v2.0-870. Nov 4 12:22:09.495055 kernel: iscsi: registered transport (tcp) Nov 4 12:22:09.508358 kernel: iscsi: registered transport (qla4xxx) Nov 4 12:22:09.508385 kernel: QLogic iSCSI HBA Driver Nov 4 12:22:09.527910 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 12:22:09.546849 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:22:09.548949 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 12:22:09.591346 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 4 12:22:09.594635 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 4 12:22:09.597173 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 4 12:22:09.632794 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 4 12:22:09.635240 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 12:22:09.662387 systemd-udevd[628]: Using default interface naming scheme 'v257'. Nov 4 12:22:09.669858 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:22:09.672129 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 4 12:22:09.696120 dracut-pre-trigger[696]: rd.md=0: removing MD RAID activation Nov 4 12:22:09.696193 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 12:22:09.701528 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 12:22:09.717662 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 12:22:09.720110 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
Nov 4 12:22:09.744999 systemd-networkd[742]: lo: Link UP Nov 4 12:22:09.745008 systemd-networkd[742]: lo: Gained carrier Nov 4 12:22:09.745503 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 12:22:09.746895 systemd[1]: Reached target network.target - Network. Nov 4 12:22:09.771428 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 12:22:09.775879 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 4 12:22:09.813719 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 4 12:22:09.832083 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 4 12:22:09.843116 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 4 12:22:09.850944 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 4 12:22:09.853245 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 4 12:22:09.878168 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 12:22:09.879246 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:22:09.880496 disk-uuid[799]: Primary Header is updated. Nov 4 12:22:09.880496 disk-uuid[799]: Secondary Entries is updated. Nov 4 12:22:09.880496 disk-uuid[799]: Secondary Header is updated. Nov 4 12:22:09.881525 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:22:09.883326 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:22:09.883330 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 12:22:09.883759 systemd-networkd[742]: eth0: Link UP Nov 4 12:22:09.884004 systemd-networkd[742]: eth0: Gained carrier Nov 4 12:22:09.884013 systemd-networkd[742]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:22:09.885560 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:22:09.903113 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 12:22:09.921605 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:22:09.950151 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 4 12:22:09.951777 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 12:22:09.953289 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 12:22:09.955360 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 12:22:09.958519 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 4 12:22:09.989526 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 4 12:22:10.766450 systemd-resolved[382]: Detected conflict on linux IN A 10.0.0.89 Nov 4 12:22:10.766469 systemd-resolved[382]: Hostname conflict, changing published hostname from 'linux' to 'linux2'. Nov 4 12:22:10.916303 disk-uuid[802]: Warning: The kernel is still using the old partition table. 
Nov 4 12:22:10.916303 disk-uuid[802]: The new table will be used at the next reboot or after you Nov 4 12:22:10.916303 disk-uuid[802]: run partprobe(8) or kpartx(8) Nov 4 12:22:10.916303 disk-uuid[802]: The operation has completed successfully. Nov 4 12:22:10.924052 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 4 12:22:10.924190 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 4 12:22:10.927155 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 4 12:22:10.961062 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832) Nov 4 12:22:10.961097 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:22:10.961115 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:22:10.964643 kernel: BTRFS info (device vda6): turning on async discard Nov 4 12:22:10.964696 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 12:22:10.970034 kernel: BTRFS info (device vda6): last unmount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:22:10.972075 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 4 12:22:10.974095 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 4 12:22:11.076909 ignition[851]: Ignition 2.22.0 Nov 4 12:22:11.076927 ignition[851]: Stage: fetch-offline Nov 4 12:22:11.076966 ignition[851]: no configs at "/usr/lib/ignition/base.d" Nov 4 12:22:11.076976 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:22:11.077082 ignition[851]: parsed url from cmdline: "" Nov 4 12:22:11.077085 ignition[851]: no config URL provided Nov 4 12:22:11.077090 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" Nov 4 12:22:11.077098 ignition[851]: no config at "/usr/lib/ignition/user.ign" Nov 4 12:22:11.077136 ignition[851]: op(1): [started] loading QEMU firmware config module Nov 4 12:22:11.077140 ignition[851]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 4 12:22:11.082767 ignition[851]: op(1): [finished] loading QEMU firmware config module Nov 4 12:22:11.099164 systemd-networkd[742]: eth0: Gained IPv6LL Nov 4 12:22:11.127421 ignition[851]: parsing config with SHA512: a09e0fdb28c09eb7e9f4ca1a9532619cac39c65b9d527ce41efeb142cb347cde35d434e3cc8100b7ddf4d5bde2e2bc3bf648d92f891e3ce387bb77c2446081ab Nov 4 12:22:11.133011 unknown[851]: fetched base config from "system" Nov 4 12:22:11.133135 unknown[851]: fetched user config from "qemu" Nov 4 12:22:11.133635 ignition[851]: fetch-offline: fetch-offline passed Nov 4 12:22:11.135718 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 12:22:11.133694 ignition[851]: Ignition finished successfully Nov 4 12:22:11.137052 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 4 12:22:11.137830 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 4 12:22:11.167493 ignition[870]: Ignition 2.22.0 Nov 4 12:22:11.167508 ignition[870]: Stage: kargs Nov 4 12:22:11.167645 ignition[870]: no configs at "/usr/lib/ignition/base.d" Nov 4 12:22:11.167652 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:22:11.168443 ignition[870]: kargs: kargs passed Nov 4 12:22:11.168484 ignition[870]: Ignition finished successfully Nov 4 12:22:11.173294 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). 
Nov 4 12:22:11.175515 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 4 12:22:11.206544 ignition[878]: Ignition 2.22.0 Nov 4 12:22:11.206562 ignition[878]: Stage: disks Nov 4 12:22:11.206690 ignition[878]: no configs at "/usr/lib/ignition/base.d" Nov 4 12:22:11.209846 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 4 12:22:11.206698 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:22:11.210968 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 4 12:22:11.207574 ignition[878]: disks: disks passed Nov 4 12:22:11.212752 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 4 12:22:11.207617 ignition[878]: Ignition finished successfully Nov 4 12:22:11.214841 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 12:22:11.216759 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 12:22:11.218128 systemd[1]: Reached target basic.target - Basic System. Nov 4 12:22:11.220828 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 4 12:22:11.256345 systemd-fsck[887]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 4 12:22:11.260767 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 4 12:22:11.263741 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 4 12:22:11.329058 kernel: EXT4-fs (vda9): mounted filesystem 9b363c44-0d55-4856-b006-3e673304a340 r/w with ordered data mode. Quota mode: none. Nov 4 12:22:11.329604 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 4 12:22:11.330828 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 4 12:22:11.333322 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 12:22:11.335021 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 4 12:22:11.335962 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 4 12:22:11.335996 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 4 12:22:11.336021 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 12:22:11.354310 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 4 12:22:11.357199 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 4 12:22:11.362045 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (895) Nov 4 12:22:11.362073 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:22:11.362090 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:22:11.366766 kernel: BTRFS info (device vda6): turning on async discard Nov 4 12:22:11.366805 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 12:22:11.367477 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 12:22:11.397190 initrd-setup-root[919]: cut: /sysroot/etc/passwd: No such file or directory Nov 4 12:22:11.401820 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Nov 4 12:22:11.405904 initrd-setup-root[933]: cut: /sysroot/etc/shadow: No such file or directory Nov 4 12:22:11.409726 initrd-setup-root[940]: cut: /sysroot/etc/gshadow: No such file or directory Nov 4 12:22:11.475589 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Nov 4 12:22:11.478445 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 4 12:22:11.480072 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 4 12:22:11.503078 kernel: BTRFS info (device vda6): last unmount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:22:11.502192 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 4 12:22:11.514151 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 4 12:22:11.528946 ignition[1009]: INFO : Ignition 2.22.0 Nov 4 12:22:11.528946 ignition[1009]: INFO : Stage: mount Nov 4 12:22:11.530515 ignition[1009]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 12:22:11.530515 ignition[1009]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:22:11.530515 ignition[1009]: INFO : mount: mount passed Nov 4 12:22:11.530515 ignition[1009]: INFO : Ignition finished successfully Nov 4 12:22:11.534058 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 4 12:22:11.536576 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 4 12:22:12.331476 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 4 12:22:12.362610 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1020) Nov 4 12:22:12.362667 kernel: BTRFS info (device vda6): first mount of filesystem 316b847e-0ec6-41c0-b528-97fbff93a67a Nov 4 12:22:12.362678 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 4 12:22:12.366227 kernel: BTRFS info (device vda6): turning on async discard Nov 4 12:22:12.366286 kernel: BTRFS info (device vda6): enabling free space tree Nov 4 12:22:12.367823 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 4 12:22:12.409097 ignition[1038]: INFO : Ignition 2.22.0 Nov 4 12:22:12.409097 ignition[1038]: INFO : Stage: files Nov 4 12:22:12.411152 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 12:22:12.411152 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:22:12.411152 ignition[1038]: DEBUG : files: compiled without relabeling support, skipping Nov 4 12:22:12.415128 ignition[1038]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 4 12:22:12.415128 ignition[1038]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 4 12:22:12.415128 ignition[1038]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 4 12:22:12.415128 ignition[1038]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 4 12:22:12.415128 ignition[1038]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 4 12:22:12.415067 unknown[1038]: wrote ssh authorized keys file for user: core Nov 4 12:22:12.423164 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 4 12:22:12.423164 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Nov 4 12:22:12.504010 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 4 12:22:12.632779 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Nov 4 12:22:12.632779 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 12:22:12.636440 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 4 12:22:12.891704 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 4 12:22:13.149361 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 4 12:22:13.149361 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 4 12:22:13.153210 ignition[1038]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 4 12:22:13.153210 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 4 12:22:13.170887 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 4 12:22:13.170887 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 4 12:22:13.170887 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1 Nov 4 12:22:13.429058 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 4 12:22:13.615671 ignition[1038]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw" Nov 4 12:22:13.615671 ignition[1038]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 4 12:22:13.619399 ignition[1038]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 12:22:13.619399 ignition[1038]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 4 12:22:13.619399 ignition[1038]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 4 12:22:13.619399 ignition[1038]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 4 12:22:13.619399 ignition[1038]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 12:22:13.628068 ignition[1038]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 4 12:22:13.628068 ignition[1038]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 4 12:22:13.628068 ignition[1038]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 4 12:22:13.636439 ignition[1038]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 12:22:13.640115 ignition[1038]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 4 12:22:13.642172 ignition[1038]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 4 12:22:13.642172 ignition[1038]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 4 12:22:13.642172 ignition[1038]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 4 12:22:13.642172 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 4 12:22:13.642172 ignition[1038]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Nov 4 12:22:13.642172 ignition[1038]: INFO : files: files passed Nov 4 12:22:13.642172 ignition[1038]: INFO : Ignition finished successfully Nov 4 12:22:13.643386 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 4 12:22:13.645775 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 4 12:22:13.647770 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 4 12:22:13.657532 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 4 12:22:13.657844 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 4 12:22:13.661097 initrd-setup-root-after-ignition[1069]: grep: /sysroot/oem/oem-release: No such file or directory Nov 4 12:22:13.663331 initrd-setup-root-after-ignition[1071]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 12:22:13.663331 initrd-setup-root-after-ignition[1071]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 4 12:22:13.666490 initrd-setup-root-after-ignition[1075]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 4 12:22:13.669001 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 12:22:13.670402 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 4 12:22:13.672987 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 4 12:22:13.733099 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 4 12:22:13.733229 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 4 12:22:13.735427 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 4 12:22:13.737256 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 4 12:22:13.739244 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 4 12:22:13.740134 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 4 12:22:13.756110 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 12:22:13.758624 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 4 12:22:13.785462 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 4 12:22:13.785614 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 4 12:22:13.787768 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 12:22:13.789900 systemd[1]: Stopped target timers.target - Timer Units. Nov 4 12:22:13.791705 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 4 12:22:13.791831 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 4 12:22:13.794282 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 4 12:22:13.796133 systemd[1]: Stopped target basic.target - Basic System. Nov 4 12:22:13.797793 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 4 12:22:13.799463 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 4 12:22:13.801361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 4 12:22:13.803226 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Nov 4 12:22:13.805098 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 4 12:22:13.806959 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 4 12:22:13.808924 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 4 12:22:13.810891 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 4 12:22:13.812594 systemd[1]: Stopped target swap.target - Swaps. Nov 4 12:22:13.814009 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 4 12:22:13.814149 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 4 12:22:13.816420 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 4 12:22:13.818350 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 12:22:13.820159 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 4 12:22:13.821110 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 12:22:13.823122 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 4 12:22:13.823246 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 4 12:22:13.825953 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 4 12:22:13.826092 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 4 12:22:13.828054 systemd[1]: Stopped target paths.target - Path Units. Nov 4 12:22:13.829596 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 4 12:22:13.833084 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 12:22:13.834297 systemd[1]: Stopped target slices.target - Slice Units. Nov 4 12:22:13.836238 systemd[1]: Stopped target sockets.target - Socket Units. Nov 4 12:22:13.837795 systemd[1]: iscsid.socket: Deactivated successfully. Nov 4 12:22:13.837884 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 4 12:22:13.839360 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 4 12:22:13.839442 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 4 12:22:13.840911 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 4 12:22:13.841039 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 4 12:22:13.842769 systemd[1]: ignition-files.service: Deactivated successfully. Nov 4 12:22:13.842875 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 4 12:22:13.845120 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 4 12:22:13.847709 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 4 12:22:13.848800 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 4 12:22:13.848916 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 12:22:13.850760 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 4 12:22:13.850862 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 12:22:13.852503 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 4 12:22:13.852604 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 4 12:22:13.857868 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 4 12:22:13.862174 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Nov 4 12:22:13.873389 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 4 12:22:13.878151 ignition[1097]: INFO : Ignition 2.22.0 Nov 4 12:22:13.878151 ignition[1097]: INFO : Stage: umount Nov 4 12:22:13.880445 ignition[1097]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 4 12:22:13.880445 ignition[1097]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 4 12:22:13.880445 ignition[1097]: INFO : umount: umount passed Nov 4 12:22:13.880445 ignition[1097]: INFO : Ignition finished successfully Nov 4 12:22:13.878192 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 4 12:22:13.878318 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 4 12:22:13.881530 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 4 12:22:13.883063 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 4 12:22:13.884297 systemd[1]: Stopped target network.target - Network. Nov 4 12:22:13.885184 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 4 12:22:13.885237 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 4 12:22:13.886724 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 4 12:22:13.886767 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 4 12:22:13.888488 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 4 12:22:13.888540 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 4 12:22:13.890043 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 4 12:22:13.890088 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 4 12:22:13.891813 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 4 12:22:13.891861 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 4 12:22:13.893619 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 4 12:22:13.895307 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 4 12:22:13.904429 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 4 12:22:13.904524 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 4 12:22:13.908163 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 4 12:22:13.908278 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 4 12:22:13.912116 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 4 12:22:13.913952 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 4 12:22:13.913988 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 4 12:22:13.916675 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 4 12:22:13.918210 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 4 12:22:13.918283 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 4 12:22:13.920338 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 12:22:13.920380 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:22:13.922064 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 4 12:22:13.922106 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 4 12:22:13.924090 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 12:22:13.935114 systemd[1]: systemd-udevd.service: Deactivated successfully. 
Nov 4 12:22:13.937078 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:22:13.939385 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 4 12:22:13.939448 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 4 12:22:13.941502 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 4 12:22:13.941535 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:22:13.943298 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 4 12:22:13.943350 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 4 12:22:13.945915 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 4 12:22:13.945966 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 4 12:22:13.948596 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 4 12:22:13.948645 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 4 12:22:13.956652 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 4 12:22:13.957711 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 4 12:22:13.957775 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:22:13.959945 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 4 12:22:13.960000 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 12:22:13.962095 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Nov 4 12:22:13.962144 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:22:13.964423 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 4 12:22:13.964470 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:22:13.966383 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 4 12:22:13.966441 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:22:13.969233 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 4 12:22:13.969354 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 4 12:22:13.970619 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 4 12:22:13.970694 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 4 12:22:13.973497 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 4 12:22:13.977593 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 4 12:22:14.000417 systemd[1]: Switching root. Nov 4 12:22:14.041459 systemd-journald[343]: Journal stopped Nov 4 12:22:14.832845 systemd-journald[343]: Received SIGTERM from PID 1 (systemd). 
Nov 4 12:22:14.832896 kernel: SELinux: policy capability network_peer_controls=1 Nov 4 12:22:14.832911 kernel: SELinux: policy capability open_perms=1 Nov 4 12:22:14.832920 kernel: SELinux: policy capability extended_socket_class=1 Nov 4 12:22:14.832933 kernel: SELinux: policy capability always_check_network=0 Nov 4 12:22:14.832943 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 4 12:22:14.832953 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 4 12:22:14.832963 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 4 12:22:14.832979 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 4 12:22:14.832988 kernel: SELinux: policy capability userspace_initial_context=0 Nov 4 12:22:14.832998 kernel: audit: type=1403 audit(1762258934.250:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 4 12:22:14.833010 systemd[1]: Successfully loaded SELinux policy in 61.972ms. Nov 4 12:22:14.833023 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.515ms. Nov 4 12:22:14.833112 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 4 12:22:14.833124 systemd[1]: Detected virtualization kvm. Nov 4 12:22:14.833134 systemd[1]: Detected architecture arm64. Nov 4 12:22:14.833145 systemd[1]: Detected first boot. Nov 4 12:22:14.833155 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 4 12:22:14.833168 zram_generator::config[1141]: No configuration found. Nov 4 12:22:14.833179 kernel: NET: Registered PF_VSOCK protocol family Nov 4 12:22:14.833190 systemd[1]: Populated /etc with preset unit settings. Nov 4 12:22:14.833201 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 4 12:22:14.833212 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 4 12:22:14.833222 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 4 12:22:14.833234 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 4 12:22:14.833246 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 4 12:22:14.833267 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 4 12:22:14.833279 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 4 12:22:14.833290 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 4 12:22:14.833300 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 4 12:22:14.833311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 4 12:22:14.833322 systemd[1]: Created slice user.slice - User and Session Slice. Nov 4 12:22:14.833337 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 4 12:22:14.833348 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 4 12:22:14.833359 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 4 12:22:14.833369 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
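
Note: the zram generator above reports "No configuration found", so no compressed swap device is set up on this host. For reference, a minimal /etc/systemd/zram-generator.conf uses the generator's documented keys; the size expression below is only an example, not anything read from this system.

    # /etc/systemd/zram-generator.conf (not present on this host)
    [zram0]
    zram-size = min(ram / 2, 4096)
    compression-algorithm = zstd
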
Nov 4 12:22:14.833383 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 4 12:22:14.833394 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 4 12:22:14.833404 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 4 12:22:14.833416 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 4 12:22:14.833427 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 4 12:22:14.833437 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 4 12:22:14.833448 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 4 12:22:14.833458 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 4 12:22:14.833468 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 4 12:22:14.833481 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 4 12:22:14.833493 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 4 12:22:14.833503 systemd[1]: Reached target slices.target - Slice Units. Nov 4 12:22:14.833513 systemd[1]: Reached target swap.target - Swaps. Nov 4 12:22:14.833524 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 4 12:22:14.833535 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 4 12:22:14.833545 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 4 12:22:14.833557 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 4 12:22:14.833568 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 4 12:22:14.833579 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 4 12:22:14.833590 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 4 12:22:14.833601 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 4 12:22:14.833611 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 4 12:22:14.833621 systemd[1]: Mounting media.mount - External Media Directory... Nov 4 12:22:14.833633 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 4 12:22:14.833644 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 4 12:22:14.833654 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 4 12:22:14.833665 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 4 12:22:14.833676 systemd[1]: Reached target machines.target - Containers. Nov 4 12:22:14.833686 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 4 12:22:14.833697 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:22:14.833709 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 4 12:22:14.833721 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 4 12:22:14.833732 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:22:14.833742 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
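
Note: the modprobe@configfs/dm_mod/drm jobs started here (and efi_pstore, fuse, loop just below) are all instances of one template unit shipped with systemd; each instance loads the module named by its instance string. Roughly, paraphrased rather than copied from this system's unit file:

    # modprobe@.service (template); "modprobe@drm.service" loads the drm module
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %i
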
Nov 4 12:22:14.833753 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 12:22:14.833764 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 4 12:22:14.833774 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 12:22:14.833786 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 4 12:22:14.833800 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 4 12:22:14.833811 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 4 12:22:14.833821 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 4 12:22:14.833836 systemd[1]: Stopped systemd-fsck-usr.service. Nov 4 12:22:14.833850 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 12:22:14.833869 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 4 12:22:14.833882 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 4 12:22:14.833923 kernel: fuse: init (API version 7.41) Nov 4 12:22:14.833933 kernel: ACPI: bus type drm_connector registered Nov 4 12:22:14.833944 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 4 12:22:14.833955 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 4 12:22:14.833965 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 4 12:22:14.833976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 4 12:22:14.833987 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 4 12:22:14.833998 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 4 12:22:14.834009 systemd[1]: Mounted media.mount - External Media Directory. Nov 4 12:22:14.834019 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 4 12:22:14.834046 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 4 12:22:14.834074 systemd-journald[1218]: Collecting audit messages is disabled. Nov 4 12:22:14.834097 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 4 12:22:14.834109 systemd-journald[1218]: Journal started Nov 4 12:22:14.834130 systemd-journald[1218]: Runtime Journal (/run/log/journal/2b788d43372e4615817d535a50017a21) is 6M, max 48.5M, 42.4M free. Nov 4 12:22:14.834164 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 4 12:22:14.613183 systemd[1]: Queued start job for default target multi-user.target. Nov 4 12:22:14.630857 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 4 12:22:14.631267 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 4 12:22:14.838050 systemd[1]: Started systemd-journald.service - Journal Service. Nov 4 12:22:14.838938 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 4 12:22:14.840429 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 4 12:22:14.840593 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 4 12:22:14.842059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Nov 4 12:22:14.842220 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:22:14.843482 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 12:22:14.843636 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 12:22:14.844899 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 12:22:14.845148 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 12:22:14.846504 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 4 12:22:14.846661 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 4 12:22:14.847926 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 12:22:14.848105 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 12:22:14.849514 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 4 12:22:14.850932 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 4 12:22:14.852966 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 4 12:22:14.854711 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 4 12:22:14.866800 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 4 12:22:14.868266 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 4 12:22:14.870497 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 4 12:22:14.872489 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 4 12:22:14.873681 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 4 12:22:14.873710 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 4 12:22:14.875550 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 4 12:22:14.876891 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 12:22:14.882772 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 4 12:22:14.884789 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 4 12:22:14.885966 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 12:22:14.886880 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 4 12:22:14.888099 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 12:22:14.891184 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:22:14.892486 systemd-journald[1218]: Time spent on flushing to /var/log/journal/2b788d43372e4615817d535a50017a21 is 11.463ms for 888 entries. Nov 4 12:22:14.892486 systemd-journald[1218]: System Journal (/var/log/journal/2b788d43372e4615817d535a50017a21) is 8M, max 163.5M, 155.5M free. Nov 4 12:22:14.915163 systemd-journald[1218]: Received client request to flush runtime journal. Nov 4 12:22:14.915212 kernel: loop1: detected capacity change from 0 to 100624 Nov 4 12:22:14.893337 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
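
Note: the journald lines above show the runtime journal in /run (6M used, 48.5M max) being flushed into the persistent journal under /var/log/journal (8M used, 163.5M max); both ceilings were sized automatically from the filesystems. They can be pinned explicitly in journald.conf; the keys below are the standard ones, the values are only an example.

    # /etc/systemd/journald.conf (excerpt)
    [Journal]
    Storage=persistent
    RuntimeMaxUse=48M
    SystemMaxUse=160M
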
Nov 4 12:22:14.896847 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 4 12:22:14.901360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 4 12:22:14.903859 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 4 12:22:14.905277 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 4 12:22:14.914566 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 4 12:22:14.917101 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 4 12:22:14.924287 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 4 12:22:14.928753 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 4 12:22:14.929867 kernel: loop2: detected capacity change from 0 to 200800 Nov 4 12:22:14.930911 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Nov 4 12:22:14.931169 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Nov 4 12:22:14.934547 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:22:14.944304 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 4 12:22:14.947529 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 4 12:22:14.957554 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 4 12:22:14.962042 kernel: loop3: detected capacity change from 0 to 119344 Nov 4 12:22:14.974109 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 4 12:22:14.976766 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 4 12:22:14.980051 kernel: loop4: detected capacity change from 0 to 100624 Nov 4 12:22:14.981185 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 4 12:22:14.985563 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 4 12:22:14.986045 kernel: loop5: detected capacity change from 0 to 200800 Nov 4 12:22:14.993044 kernel: loop6: detected capacity change from 0 to 119344 Nov 4 12:22:14.996953 (sd-merge)[1280]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 4 12:22:15.000646 (sd-merge)[1280]: Merged extensions into '/usr'. Nov 4 12:22:15.004181 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)... Nov 4 12:22:15.004199 systemd[1]: Reloading... Nov 4 12:22:15.005661 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Nov 4 12:22:15.005882 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Nov 4 12:22:15.055274 zram_generator::config[1317]: No configuration found. Nov 4 12:22:15.094678 systemd-resolved[1279]: Positive Trust Anchors: Nov 4 12:22:15.094951 systemd-resolved[1279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 4 12:22:15.094958 systemd-resolved[1279]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 4 12:22:15.094989 systemd-resolved[1279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 4 12:22:15.101228 systemd-resolved[1279]: Defaulting to hostname 'linux'. Nov 4 12:22:15.190206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 4 12:22:15.190477 systemd[1]: Reloading finished in 185 ms. Nov 4 12:22:15.204476 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 4 12:22:15.205904 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 4 12:22:15.207283 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 4 12:22:15.208813 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 4 12:22:15.214389 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 4 12:22:15.236295 systemd[1]: Starting ensure-sysext.service... Nov 4 12:22:15.238071 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 4 12:22:15.246826 systemd[1]: Reload requested from client PID 1352 ('systemctl') (unit ensure-sysext.service)... Nov 4 12:22:15.246839 systemd[1]: Reloading... Nov 4 12:22:15.261810 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 4 12:22:15.262182 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 4 12:22:15.262591 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 4 12:22:15.262885 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 4 12:22:15.263535 systemd-tmpfiles[1353]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 4 12:22:15.263841 systemd-tmpfiles[1353]: ACLs are not supported, ignoring. Nov 4 12:22:15.264038 systemd-tmpfiles[1353]: ACLs are not supported, ignoring. Nov 4 12:22:15.268004 systemd-tmpfiles[1353]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 12:22:15.268664 systemd-tmpfiles[1353]: Skipping /boot Nov 4 12:22:15.278184 systemd-tmpfiles[1353]: Detected autofs mount point /boot during canonicalization of boot. Nov 4 12:22:15.278302 systemd-tmpfiles[1353]: Skipping /boot Nov 4 12:22:15.298088 zram_generator::config[1383]: No configuration found. Nov 4 12:22:15.423551 systemd[1]: Reloading finished in 176 ms. Nov 4 12:22:15.443055 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 4 12:22:15.461823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 4 12:22:15.468738 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 12:22:15.470649 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
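
Note: the sd-merge lines above show systemd-sysext overlaying three extension images (containerd-flatcar.raw, docker-flatcar.raw, and the kubernetes.raw written during the files stage) onto /usr. Each image carries an extension-release file that must match the host before it is merged; schematically, with field names from the sysext convention and values that are assumptions here:

    # /usr/lib/extension-release.d/extension-release.kubernetes (inside the image)
    ID=flatcar
    SYSEXT_LEVEL=1.0
    ARCHITECTURE=arm64

After boot, `systemd-sysext status` lists which hierarchies are currently overlaid and by which images.
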
Nov 4 12:22:15.491924 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 4 12:22:15.494053 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 4 12:22:15.499350 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 4 12:22:15.502289 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 4 12:22:15.506654 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:22:15.510443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:22:15.514988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 12:22:15.521012 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 12:22:15.522022 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 12:22:15.522157 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 4 12:22:15.525110 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 4 12:22:15.528881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 12:22:15.531067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:22:15.532689 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 12:22:15.532825 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 12:22:15.541361 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 4 12:22:15.544118 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 4 12:22:15.545889 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 12:22:15.546047 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 12:22:15.549299 systemd-udevd[1429]: Using default interface naming scheme 'v257'. Nov 4 12:22:15.549697 augenrules[1451]: No rules Nov 4 12:22:15.550387 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 12:22:15.550578 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 12:22:15.558306 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 12:22:15.561198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 4 12:22:15.562280 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 4 12:22:15.564779 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 4 12:22:15.574266 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 4 12:22:15.577773 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 4 12:22:15.579081 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 4 12:22:15.579199 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
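
Note: audit-rules.service runs augenrules, which concatenates /etc/audit/rules.d/*.rules into the kernel audit ruleset; "No rules" above means it found nothing, so the audit subsystem stays with an empty ruleset. An illustrative rule file (the path and key names are examples; the syntax is standard auditctl watch syntax):

    # /etc/audit/rules.d/10-identity.rules (not present on this host)
    -w /etc/passwd -p wa -k identity
    -w /etc/ssh/sshd_config -p wa -k sshd
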
Nov 4 12:22:15.579323 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 4 12:22:15.580339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 4 12:22:15.582917 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 4 12:22:15.583148 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 4 12:22:15.585700 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 4 12:22:15.585839 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 4 12:22:15.587589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 4 12:22:15.587720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 4 12:22:15.589427 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 4 12:22:15.589580 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 4 12:22:15.596303 systemd[1]: Finished ensure-sysext.service. Nov 4 12:22:15.603270 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 4 12:22:15.604885 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 4 12:22:15.604939 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 4 12:22:15.607179 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 4 12:22:15.616420 augenrules[1459]: /sbin/augenrules: No change Nov 4 12:22:15.627151 augenrules[1506]: No rules Nov 4 12:22:15.628736 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 12:22:15.629163 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 12:22:15.658801 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 4 12:22:15.690944 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 4 12:22:15.692569 systemd[1]: Reached target time-set.target - System Time Set. Nov 4 12:22:15.696981 systemd-networkd[1497]: lo: Link UP Nov 4 12:22:15.696988 systemd-networkd[1497]: lo: Gained carrier Nov 4 12:22:15.698457 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 4 12:22:15.700347 systemd[1]: Reached target network.target - Network. Nov 4 12:22:15.703321 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 4 12:22:15.705542 systemd-networkd[1497]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:22:15.705618 systemd-networkd[1497]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 4 12:22:15.706165 systemd-networkd[1497]: eth0: Link UP Nov 4 12:22:15.706280 systemd-networkd[1497]: eth0: Gained carrier Nov 4 12:22:15.706294 systemd-networkd[1497]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 4 12:22:15.706450 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 4 12:22:15.712161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
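
Note: eth0 above is matched by Flatcar's catch-all zz-default.network, which hands the interface to DHCP (the address acquisition is logged just below). The shipped file is not reproduced in this log; a functionally similar .network unit would be:

    # /usr/lib/systemd/network/zz-default.network (sketch, not the literal shipped file)
    [Match]
    Name=*

    [Network]
    DHCP=yes
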
Nov 4 12:22:15.717637 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 4 12:22:15.721168 systemd-networkd[1497]: eth0: DHCPv4 address 10.0.0.89/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 4 12:22:15.722495 systemd-timesyncd[1498]: Network configuration changed, trying to establish connection. Nov 4 12:22:16.199685 systemd-timesyncd[1498]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 4 12:22:16.199805 systemd-timesyncd[1498]: Initial clock synchronization to Tue 2025-11-04 12:22:16.199545 UTC. Nov 4 12:22:16.200203 systemd-resolved[1279]: Clock change detected. Flushing caches. Nov 4 12:22:16.206330 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 4 12:22:16.209514 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 4 12:22:16.269300 ldconfig[1421]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 4 12:22:16.273360 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 4 12:22:16.282524 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 4 12:22:16.285135 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 4 12:22:16.301567 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 4 12:22:16.326111 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 4 12:22:16.329659 systemd[1]: Reached target sysinit.target - System Initialization. Nov 4 12:22:16.330800 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 4 12:22:16.332056 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 4 12:22:16.333453 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 4 12:22:16.334665 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 4 12:22:16.335892 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 4 12:22:16.337129 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 4 12:22:16.337162 systemd[1]: Reached target paths.target - Path Units. Nov 4 12:22:16.338090 systemd[1]: Reached target timers.target - Timer Units. Nov 4 12:22:16.339692 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 4 12:22:16.341935 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 4 12:22:16.344738 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 4 12:22:16.346125 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 4 12:22:16.347427 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 4 12:22:16.350302 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 4 12:22:16.351733 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 4 12:22:16.353399 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 4 12:22:16.354512 systemd[1]: Reached target sockets.target - Socket Units. Nov 4 12:22:16.355468 systemd[1]: Reached target basic.target - Basic System. 
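
Note: systemd-timesyncd above synchronizes against 10.0.0.1:123, the same gateway that served the DHCP lease; networkd forwards DHCP-supplied NTP servers to timesyncd when UseNTP= is enabled (the default), which is the most likely source here. The resulting jump to the correct time is what triggers the "Clock change detected" cache flush in systemd-resolved. Had no server been offered, timesyncd would fall back to the servers in timesyncd.conf, for example (illustrative values):

    # /etc/systemd/timesyncd.conf (excerpt)
    [Time]
    #NTP=
    FallbackNTP=0.pool.ntp.org 1.pool.ntp.org
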
Nov 4 12:22:16.356376 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 4 12:22:16.356404 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 4 12:22:16.357233 systemd[1]: Starting containerd.service - containerd container runtime... Nov 4 12:22:16.359053 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 4 12:22:16.360868 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 4 12:22:16.362758 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 4 12:22:16.364553 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 4 12:22:16.365551 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 4 12:22:16.368405 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 4 12:22:16.369534 jq[1559]: false Nov 4 12:22:16.370089 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 4 12:22:16.371806 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 4 12:22:16.374433 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 4 12:22:16.377557 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 4 12:22:16.378511 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 4 12:22:16.378878 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 4 12:22:16.379481 extend-filesystems[1560]: Found /dev/vda6 Nov 4 12:22:16.380579 systemd[1]: Starting update-engine.service - Update Engine... Nov 4 12:22:16.383414 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 4 12:22:16.385598 extend-filesystems[1560]: Found /dev/vda9 Nov 4 12:22:16.386062 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 4 12:22:16.389925 extend-filesystems[1560]: Checking size of /dev/vda9 Nov 4 12:22:16.390705 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 4 12:22:16.390858 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 4 12:22:16.391077 systemd[1]: motdgen.service: Deactivated successfully. Nov 4 12:22:16.391234 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 4 12:22:16.394652 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 4 12:22:16.394830 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
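
Note: extend-filesystems has found the root partition /dev/vda9 and is checking whether the ext4 filesystem fills it; the resize2fs run logged just below grows it online while mounted. In round numbers, using the 4 KiB block size the kernel line below reports:

    456704  blocks x 4096 B ≈ 1.74 GiB   (filesystem as shipped in the image)
    1784827 blocks x 4096 B ≈ 6.81 GiB   (filesystem after growing into the partition)

The equivalent manual step on a mounted ext4 root would be `resize2fs /dev/vda9`.
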
Nov 4 12:22:16.400344 extend-filesystems[1560]: Resized partition /dev/vda9 Nov 4 12:22:16.404695 extend-filesystems[1593]: resize2fs 1.47.3 (8-Jul-2025) Nov 4 12:22:16.406509 update_engine[1573]: I20251104 12:22:16.406298 1573 main.cc:92] Flatcar Update Engine starting Nov 4 12:22:16.408449 jq[1575]: true Nov 4 12:22:16.416539 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 4 12:22:16.424179 jq[1603]: true Nov 4 12:22:16.425794 tar[1585]: linux-arm64/LICENSE Nov 4 12:22:16.426119 tar[1585]: linux-arm64/helm Nov 4 12:22:16.450707 dbus-daemon[1557]: [system] SELinux support is enabled Nov 4 12:22:16.451228 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 4 12:22:16.455318 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 4 12:22:16.455347 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 4 12:22:16.457823 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 4 12:22:16.457846 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 4 12:22:16.458302 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 4 12:22:16.473037 update_engine[1573]: I20251104 12:22:16.460217 1573 update_check_scheduler.cc:74] Next update check in 9m5s Nov 4 12:22:16.462014 systemd[1]: Started update-engine.service - Update Engine. Nov 4 12:22:16.466436 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 4 12:22:16.473660 extend-filesystems[1593]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 4 12:22:16.473660 extend-filesystems[1593]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 4 12:22:16.473660 extend-filesystems[1593]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 4 12:22:16.473556 systemd-logind[1570]: Watching system buttons on /dev/input/event0 (Power Button) Nov 4 12:22:16.480397 bash[1622]: Updated "/home/core/.ssh/authorized_keys" Nov 4 12:22:16.480462 extend-filesystems[1560]: Resized filesystem in /dev/vda9 Nov 4 12:22:16.474411 systemd-logind[1570]: New seat seat0. Nov 4 12:22:16.474819 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 4 12:22:16.475554 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 4 12:22:16.480444 systemd[1]: Started systemd-logind.service - User Login Management. Nov 4 12:22:16.482436 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 4 12:22:16.486976 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
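
Note: in the containerd startup that follows, two details stand out: the on-disk /usr/share/containerd/config.toml is still a version-2 file (containerd 2.0 migrates it in memory, warning about an unrecognized `subreaper` key left in it), and the migrated CRI runtime config carries SystemdCgroup=true, so runc delegates cgroup management to systemd. A version-2 fragment expressing that cgroup choice looks roughly like this; the full shipped file is not in the log.

    # containerd config, schema version 2 (pre-2.0 layout)
    version = 2

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
        SystemdCgroup = true
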
Nov 4 12:22:16.524421 locksmithd[1623]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 4 12:22:16.592051 containerd[1589]: time="2025-11-04T12:22:16Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 4 12:22:16.592634 containerd[1589]: time="2025-11-04T12:22:16.592609250Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 4 12:22:16.601317 containerd[1589]: time="2025-11-04T12:22:16.601283610Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.28µs" Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601410370Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601434010Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601569170Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601584610Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601604850Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601647170Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601657450Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601826850Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601839090Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601849330Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601856930Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602608 containerd[1589]: time="2025-11-04T12:22:16.601915610Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602861 containerd[1589]: time="2025-11-04T12:22:16.602084690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602861 containerd[1589]: time="2025-11-04T12:22:16.602109650Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 4 12:22:16.602861 containerd[1589]: time="2025-11-04T12:22:16.602118610Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 4 12:22:16.602861 containerd[1589]: time="2025-11-04T12:22:16.602149210Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 4 12:22:16.602861 containerd[1589]: time="2025-11-04T12:22:16.602367290Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 4 12:22:16.602861 containerd[1589]: time="2025-11-04T12:22:16.602424170Z" level=info msg="metadata content store policy set" policy=shared Nov 4 12:22:16.606202 containerd[1589]: time="2025-11-04T12:22:16.606178210Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 4 12:22:16.606329 containerd[1589]: time="2025-11-04T12:22:16.606314530Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 4 12:22:16.606385 containerd[1589]: time="2025-11-04T12:22:16.606373010Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 4 12:22:16.606435 containerd[1589]: time="2025-11-04T12:22:16.606422610Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 4 12:22:16.606487 containerd[1589]: time="2025-11-04T12:22:16.606474690Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 4 12:22:16.606562 containerd[1589]: time="2025-11-04T12:22:16.606548690Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 4 12:22:16.606612 containerd[1589]: time="2025-11-04T12:22:16.606601290Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 4 12:22:16.606662 containerd[1589]: time="2025-11-04T12:22:16.606650170Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 4 12:22:16.606714 containerd[1589]: time="2025-11-04T12:22:16.606702010Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 4 12:22:16.606764 containerd[1589]: time="2025-11-04T12:22:16.606752290Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 4 12:22:16.606813 containerd[1589]: time="2025-11-04T12:22:16.606800610Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 4 12:22:16.606871 containerd[1589]: time="2025-11-04T12:22:16.606857730Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 4 12:22:16.607020 containerd[1589]: time="2025-11-04T12:22:16.607003610Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 4 12:22:16.607084 containerd[1589]: time="2025-11-04T12:22:16.607071650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 4 12:22:16.607139 containerd[1589]: time="2025-11-04T12:22:16.607126010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 4 12:22:16.607200 
containerd[1589]: time="2025-11-04T12:22:16.607187970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 4 12:22:16.607249 containerd[1589]: time="2025-11-04T12:22:16.607238050Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 4 12:22:16.607310 containerd[1589]: time="2025-11-04T12:22:16.607298290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 4 12:22:16.607374 containerd[1589]: time="2025-11-04T12:22:16.607361130Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 4 12:22:16.607431 containerd[1589]: time="2025-11-04T12:22:16.607418490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 4 12:22:16.607481 containerd[1589]: time="2025-11-04T12:22:16.607470290Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 4 12:22:16.607542 containerd[1589]: time="2025-11-04T12:22:16.607530730Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 4 12:22:16.607598 containerd[1589]: time="2025-11-04T12:22:16.607585090Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 4 12:22:16.607820 containerd[1589]: time="2025-11-04T12:22:16.607805770Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 4 12:22:16.607878 containerd[1589]: time="2025-11-04T12:22:16.607867450Z" level=info msg="Start snapshots syncer" Nov 4 12:22:16.607945 containerd[1589]: time="2025-11-04T12:22:16.607931490Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 4 12:22:16.608198 containerd[1589]: time="2025-11-04T12:22:16.608164090Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 4 12:22:16.608350 containerd[1589]: time="2025-11-04T12:22:16.608333890Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 4 12:22:16.608483 containerd[1589]: time="2025-11-04T12:22:16.608467330Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 4 12:22:16.608649 containerd[1589]: time="2025-11-04T12:22:16.608631850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 4 12:22:16.608720 containerd[1589]: time="2025-11-04T12:22:16.608706530Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 4 12:22:16.608771 containerd[1589]: time="2025-11-04T12:22:16.608757850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 4 12:22:16.608824 containerd[1589]: time="2025-11-04T12:22:16.608811370Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 4 12:22:16.608876 containerd[1589]: time="2025-11-04T12:22:16.608863330Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 4 12:22:16.608935 containerd[1589]: time="2025-11-04T12:22:16.608922770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 4 12:22:16.608985 containerd[1589]: time="2025-11-04T12:22:16.608973250Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 4 12:22:16.609057 containerd[1589]: time="2025-11-04T12:22:16.609043610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 4 12:22:16.609110 containerd[1589]: 
time="2025-11-04T12:22:16.609096850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 4 12:22:16.609162 containerd[1589]: time="2025-11-04T12:22:16.609149250Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 4 12:22:16.609243 containerd[1589]: time="2025-11-04T12:22:16.609228530Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 12:22:16.609310 containerd[1589]: time="2025-11-04T12:22:16.609296330Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 4 12:22:16.609369 containerd[1589]: time="2025-11-04T12:22:16.609355490Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 12:22:16.609626 containerd[1589]: time="2025-11-04T12:22:16.609593330Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 4 12:22:16.609660 containerd[1589]: time="2025-11-04T12:22:16.609629090Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 4 12:22:16.609679 containerd[1589]: time="2025-11-04T12:22:16.609655770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 4 12:22:16.609679 containerd[1589]: time="2025-11-04T12:22:16.609675850Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 4 12:22:16.609775 containerd[1589]: time="2025-11-04T12:22:16.609758850Z" level=info msg="runtime interface created" Nov 4 12:22:16.609775 containerd[1589]: time="2025-11-04T12:22:16.609768850Z" level=info msg="created NRI interface" Nov 4 12:22:16.609822 containerd[1589]: time="2025-11-04T12:22:16.609781010Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 4 12:22:16.609822 containerd[1589]: time="2025-11-04T12:22:16.609798090Z" level=info msg="Connect containerd service" Nov 4 12:22:16.609855 containerd[1589]: time="2025-11-04T12:22:16.609842450Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 4 12:22:16.612051 containerd[1589]: time="2025-11-04T12:22:16.612027490Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 12:22:16.681931 containerd[1589]: time="2025-11-04T12:22:16.681760130Z" level=info msg="Start subscribing containerd event" Nov 4 12:22:16.681931 containerd[1589]: time="2025-11-04T12:22:16.681838250Z" level=info msg="Start recovering state" Nov 4 12:22:16.681931 containerd[1589]: time="2025-11-04T12:22:16.681932410Z" level=info msg="Start event monitor" Nov 4 12:22:16.682054 containerd[1589]: time="2025-11-04T12:22:16.681946490Z" level=info msg="Start cni network conf syncer for default" Nov 4 12:22:16.682054 containerd[1589]: time="2025-11-04T12:22:16.681952890Z" level=info msg="Start streaming server" Nov 4 12:22:16.682054 containerd[1589]: time="2025-11-04T12:22:16.681961290Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 4 12:22:16.682054 containerd[1589]: time="2025-11-04T12:22:16.681968250Z" level=info 
msg="runtime interface starting up..." Nov 4 12:22:16.682054 containerd[1589]: time="2025-11-04T12:22:16.681973370Z" level=info msg="starting plugins..." Nov 4 12:22:16.682054 containerd[1589]: time="2025-11-04T12:22:16.681984730Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 4 12:22:16.682391 containerd[1589]: time="2025-11-04T12:22:16.682367090Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 4 12:22:16.682508 containerd[1589]: time="2025-11-04T12:22:16.682483730Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 4 12:22:16.682619 containerd[1589]: time="2025-11-04T12:22:16.682606130Z" level=info msg="containerd successfully booted in 0.091080s" Nov 4 12:22:16.682743 systemd[1]: Started containerd.service - containerd container runtime. Nov 4 12:22:16.750089 tar[1585]: linux-arm64/README.md Nov 4 12:22:16.765228 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 4 12:22:16.983418 sshd_keygen[1582]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 4 12:22:17.003348 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 4 12:22:17.005950 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 4 12:22:17.026161 systemd[1]: issuegen.service: Deactivated successfully. Nov 4 12:22:17.026377 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 4 12:22:17.028803 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 4 12:22:17.043186 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 4 12:22:17.045735 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 4 12:22:17.047742 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 4 12:22:17.049010 systemd[1]: Reached target getty.target - Login Prompts. Nov 4 12:22:18.167460 systemd-networkd[1497]: eth0: Gained IPv6LL Nov 4 12:22:18.172807 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 4 12:22:18.174520 systemd[1]: Reached target network-online.target - Network is Online. Nov 4 12:22:18.177671 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 4 12:22:18.179970 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:22:18.192852 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 4 12:22:18.206757 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 4 12:22:18.206955 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 4 12:22:18.208869 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 4 12:22:18.212201 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 4 12:22:18.701069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:22:18.702767 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 4 12:22:18.704777 systemd[1]: Startup finished in 1.181s (kernel) + 5.138s (initrd) + 4.040s (userspace) = 10.360s. 
Nov 4 12:22:18.704880 (kubelet)[1697]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 12:22:19.002199 kubelet[1697]: E1104 12:22:19.002081 1697 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 12:22:19.004415 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 12:22:19.004556 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 12:22:19.006353 systemd[1]: kubelet.service: Consumed 679ms CPU time, 248.1M memory peak. Nov 4 12:22:21.070520 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 4 12:22:21.071492 systemd[1]: Started sshd@0-10.0.0.89:22-10.0.0.1:42378.service - OpenSSH per-connection server daemon (10.0.0.1:42378). Nov 4 12:22:21.148597 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 42378 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:22:21.150324 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:22:21.156086 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 4 12:22:21.156974 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 4 12:22:21.161820 systemd-logind[1570]: New session 1 of user core. Nov 4 12:22:21.185539 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 4 12:22:21.187926 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 4 12:22:21.204235 (systemd)[1715]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 4 12:22:21.206288 systemd-logind[1570]: New session c1 of user core. Nov 4 12:22:21.306040 systemd[1715]: Queued start job for default target default.target. Nov 4 12:22:21.319244 systemd[1715]: Created slice app.slice - User Application Slice. Nov 4 12:22:21.319385 systemd[1715]: Reached target paths.target - Paths. Nov 4 12:22:21.319521 systemd[1715]: Reached target timers.target - Timers. Nov 4 12:22:21.320730 systemd[1715]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 4 12:22:21.331229 systemd[1715]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 4 12:22:21.331320 systemd[1715]: Reached target sockets.target - Sockets. Nov 4 12:22:21.331363 systemd[1715]: Reached target basic.target - Basic System. Nov 4 12:22:21.331392 systemd[1715]: Reached target default.target - Main User Target. Nov 4 12:22:21.331416 systemd[1715]: Startup finished in 119ms. Nov 4 12:22:21.331572 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 4 12:22:21.332753 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 4 12:22:21.395916 systemd[1]: Started sshd@1-10.0.0.89:22-10.0.0.1:42390.service - OpenSSH per-connection server daemon (10.0.0.1:42390). Nov 4 12:22:21.445338 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 42390 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:22:21.446548 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:22:21.450009 systemd-logind[1570]: New session 2 of user core. 
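Note: the kubelet exit above is the expected pre-bootstrap state: /var/lib/kubelet/config.yaml is written by kubeadm during init/join, so the unit keeps failing until that happens. A quick way to confirm this is the only problem (commands shown for illustration):

ls -l /var/lib/kubelet/config.yaml      # absent until kubeadm init/join generates it
systemctl status kubelet --no-pager     # exit status 1 and the restart counter
journalctl -u kubelet -n 20 --no-pager  # the "failed to load kubelet config file" error seen above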
Nov 4 12:22:21.462485 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 4 12:22:21.513858 sshd[1729]: Connection closed by 10.0.0.1 port 42390 Nov 4 12:22:21.515583 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Nov 4 12:22:21.530334 systemd[1]: Started sshd@2-10.0.0.89:22-10.0.0.1:42404.service - OpenSSH per-connection server daemon (10.0.0.1:42404). Nov 4 12:22:21.533812 systemd[1]: sshd@1-10.0.0.89:22-10.0.0.1:42390.service: Deactivated successfully. Nov 4 12:22:21.535189 systemd[1]: session-2.scope: Deactivated successfully. Nov 4 12:22:21.541240 systemd-logind[1570]: Session 2 logged out. Waiting for processes to exit. Nov 4 12:22:21.542369 systemd-logind[1570]: Removed session 2. Nov 4 12:22:21.584188 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 42404 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:22:21.585935 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:22:21.592150 systemd-logind[1570]: New session 3 of user core. Nov 4 12:22:21.602452 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 4 12:22:21.651059 sshd[1738]: Connection closed by 10.0.0.1 port 42404 Nov 4 12:22:21.651382 sshd-session[1732]: pam_unix(sshd:session): session closed for user core Nov 4 12:22:21.666198 systemd[1]: sshd@2-10.0.0.89:22-10.0.0.1:42404.service: Deactivated successfully. Nov 4 12:22:21.667788 systemd[1]: session-3.scope: Deactivated successfully. Nov 4 12:22:21.669007 systemd-logind[1570]: Session 3 logged out. Waiting for processes to exit. Nov 4 12:22:21.671078 systemd[1]: Started sshd@3-10.0.0.89:22-10.0.0.1:42408.service - OpenSSH per-connection server daemon (10.0.0.1:42408). Nov 4 12:22:21.675198 systemd-logind[1570]: Removed session 3. Nov 4 12:22:21.740677 sshd[1744]: Accepted publickey for core from 10.0.0.1 port 42408 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:22:21.741970 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:22:21.752416 systemd-logind[1570]: New session 4 of user core. Nov 4 12:22:21.762492 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 4 12:22:21.824603 sshd[1747]: Connection closed by 10.0.0.1 port 42408 Nov 4 12:22:21.825030 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Nov 4 12:22:21.842823 systemd[1]: sshd@3-10.0.0.89:22-10.0.0.1:42408.service: Deactivated successfully. Nov 4 12:22:21.845655 systemd[1]: session-4.scope: Deactivated successfully. Nov 4 12:22:21.846616 systemd-logind[1570]: Session 4 logged out. Waiting for processes to exit. Nov 4 12:22:21.848126 systemd[1]: Started sshd@4-10.0.0.89:22-10.0.0.1:42424.service - OpenSSH per-connection server daemon (10.0.0.1:42424). Nov 4 12:22:21.851622 systemd-logind[1570]: Removed session 4. Nov 4 12:22:21.912565 sshd[1753]: Accepted publickey for core from 10.0.0.1 port 42424 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:22:21.913675 sshd-session[1753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:22:21.919387 systemd-logind[1570]: New session 5 of user core. Nov 4 12:22:21.926441 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 4 12:22:21.984121 sudo[1757]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 4 12:22:21.984399 sudo[1757]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:22:22.002645 sudo[1757]: pam_unix(sudo:session): session closed for user root Nov 4 12:22:22.004311 sshd[1756]: Connection closed by 10.0.0.1 port 42424 Nov 4 12:22:22.004890 sshd-session[1753]: pam_unix(sshd:session): session closed for user core Nov 4 12:22:22.015395 systemd[1]: sshd@4-10.0.0.89:22-10.0.0.1:42424.service: Deactivated successfully. Nov 4 12:22:22.017506 systemd[1]: session-5.scope: Deactivated successfully. Nov 4 12:22:22.018489 systemd-logind[1570]: Session 5 logged out. Waiting for processes to exit. Nov 4 12:22:22.023370 systemd[1]: Started sshd@5-10.0.0.89:22-10.0.0.1:42434.service - OpenSSH per-connection server daemon (10.0.0.1:42434). Nov 4 12:22:22.024526 systemd-logind[1570]: Removed session 5. Nov 4 12:22:22.075487 sshd[1763]: Accepted publickey for core from 10.0.0.1 port 42434 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:22:22.076259 sshd-session[1763]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:22:22.080249 systemd-logind[1570]: New session 6 of user core. Nov 4 12:22:22.094474 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 4 12:22:22.146933 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 4 12:22:22.147556 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:22:22.165487 sudo[1768]: pam_unix(sudo:session): session closed for user root Nov 4 12:22:22.174187 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 4 12:22:22.174876 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:22:22.183693 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 4 12:22:22.231164 augenrules[1790]: No rules Nov 4 12:22:22.232181 systemd[1]: audit-rules.service: Deactivated successfully. Nov 4 12:22:22.232441 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 4 12:22:22.234468 sudo[1767]: pam_unix(sudo:session): session closed for user root Nov 4 12:22:22.235798 sshd[1766]: Connection closed by 10.0.0.1 port 42434 Nov 4 12:22:22.236164 sshd-session[1763]: pam_unix(sshd:session): session closed for user core Nov 4 12:22:22.248147 systemd[1]: sshd@5-10.0.0.89:22-10.0.0.1:42434.service: Deactivated successfully. Nov 4 12:22:22.250731 systemd[1]: session-6.scope: Deactivated successfully. Nov 4 12:22:22.253415 systemd-logind[1570]: Session 6 logged out. Waiting for processes to exit. Nov 4 12:22:22.254699 systemd[1]: Started sshd@6-10.0.0.89:22-10.0.0.1:42438.service - OpenSSH per-connection server daemon (10.0.0.1:42438). Nov 4 12:22:22.255837 systemd-logind[1570]: Removed session 6. Nov 4 12:22:22.322381 sshd[1799]: Accepted publickey for core from 10.0.0.1 port 42438 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:22:22.323730 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:22:22.327982 systemd-logind[1570]: New session 7 of user core. Nov 4 12:22:22.334418 systemd[1]: Started session-7.scope - Session 7 of User core. 
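Note: the sudo sequence above (setenforce 1, deleting the shipped 80-selinux.rules and 99-default.rules, then restarting audit-rules) leaves auditd with an empty rule set, which is exactly what the "No rules" line from augenrules reports. The resulting state can be checked by hand (illustrative):

getenforce               # prints Enforcing after the setenforce 1 call
sudo auditctl -l         # prints "No rules" while the loaded audit rule set is empty
ls /etc/audit/rules.d/   # the two removed rule files are gone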
Nov 4 12:22:22.386769 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 4 12:22:22.387025 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 4 12:22:22.699204 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 4 12:22:22.719582 (dockerd)[1823]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 4 12:22:22.932186 dockerd[1823]: time="2025-11-04T12:22:22.932092850Z" level=info msg="Starting up" Nov 4 12:22:22.936326 dockerd[1823]: time="2025-11-04T12:22:22.933060570Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 4 12:22:22.943419 dockerd[1823]: time="2025-11-04T12:22:22.943363610Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 4 12:22:23.071416 dockerd[1823]: time="2025-11-04T12:22:23.071305890Z" level=info msg="Loading containers: start." Nov 4 12:22:23.079295 kernel: Initializing XFRM netlink socket Nov 4 12:22:23.282830 systemd-networkd[1497]: docker0: Link UP Nov 4 12:22:23.286069 dockerd[1823]: time="2025-11-04T12:22:23.286023810Z" level=info msg="Loading containers: done." Nov 4 12:22:23.298183 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1713622584-merged.mount: Deactivated successfully. Nov 4 12:22:23.299186 dockerd[1823]: time="2025-11-04T12:22:23.299142130Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 4 12:22:23.299256 dockerd[1823]: time="2025-11-04T12:22:23.299225810Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 4 12:22:23.299342 dockerd[1823]: time="2025-11-04T12:22:23.299324490Z" level=info msg="Initializing buildkit" Nov 4 12:22:23.319496 dockerd[1823]: time="2025-11-04T12:22:23.319461170Z" level=info msg="Completed buildkit initialization" Nov 4 12:22:23.325997 dockerd[1823]: time="2025-11-04T12:22:23.325373770Z" level=info msg="Daemon has completed initialization" Nov 4 12:22:23.325681 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 4 12:22:23.326235 dockerd[1823]: time="2025-11-04T12:22:23.325468090Z" level=info msg="API listen on /run/docker.sock" Nov 4 12:22:23.736574 containerd[1589]: time="2025-11-04T12:22:23.736478650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\"" Nov 4 12:22:24.214042 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1281324808.mount: Deactivated successfully. 
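Note: the PullImage lines that follow are containerd's CRI plugin fetching the Kubernetes control-plane images; CRI-managed images live in containerd's k8s.io namespace, so they can be reproduced or inspected by hand with crictl or ctr (illustrative, assuming the default socket path shown above):

crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/kube-apiserver:v1.34.1
ctr -n k8s.io images ls | grep kube-apiserver   # images pulled via CRI appear in the k8s.io namespace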
Nov 4 12:22:25.270933 containerd[1589]: time="2025-11-04T12:22:25.270865530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:25.272168 containerd[1589]: time="2025-11-04T12:22:25.271826090Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.1: active requests=0, bytes read=24574512" Nov 4 12:22:25.272781 containerd[1589]: time="2025-11-04T12:22:25.272748970Z" level=info msg="ImageCreate event name:\"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:25.276172 containerd[1589]: time="2025-11-04T12:22:25.276124130Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:25.277137 containerd[1589]: time="2025-11-04T12:22:25.277094770Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.1\" with image id \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b9d7c117f8ac52bed4b13aeed973dc5198f9d93a926e6fe9e0b384f155baa902\", size \"24571109\" in 1.54058172s" Nov 4 12:22:25.277175 containerd[1589]: time="2025-11-04T12:22:25.277140770Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.1\" returns image reference \"sha256:43911e833d64d4f30460862fc0c54bb61999d60bc7d063feca71e9fc610d5196\"" Nov 4 12:22:25.278108 containerd[1589]: time="2025-11-04T12:22:25.278051250Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\"" Nov 4 12:22:26.310521 containerd[1589]: time="2025-11-04T12:22:26.310463650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:26.311024 containerd[1589]: time="2025-11-04T12:22:26.310975610Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.1: active requests=0, bytes read=19132145" Nov 4 12:22:26.311956 containerd[1589]: time="2025-11-04T12:22:26.311904810Z" level=info msg="ImageCreate event name:\"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:26.314399 containerd[1589]: time="2025-11-04T12:22:26.314371850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:26.315414 containerd[1589]: time="2025-11-04T12:22:26.315387530Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.1\" with image id \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2bf47c1b01f51e8963bf2327390883c9fa4ed03ea1b284500a2cba17ce303e89\", size \"20720058\" in 1.0372996s" Nov 4 12:22:26.315477 containerd[1589]: time="2025-11-04T12:22:26.315427970Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.1\" returns image reference \"sha256:7eb2c6ff0c5a768fd309321bc2ade0e4e11afcf4f2017ef1d0ff00d91fdf992a\"" Nov 4 12:22:26.315841 containerd[1589]: 
time="2025-11-04T12:22:26.315817730Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\"" Nov 4 12:22:27.252132 containerd[1589]: time="2025-11-04T12:22:27.252086650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:27.252974 containerd[1589]: time="2025-11-04T12:22:27.252947210Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.1: active requests=0, bytes read=14191886" Nov 4 12:22:27.253933 containerd[1589]: time="2025-11-04T12:22:27.253488330Z" level=info msg="ImageCreate event name:\"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:27.255918 containerd[1589]: time="2025-11-04T12:22:27.255887370Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:27.257669 containerd[1589]: time="2025-11-04T12:22:27.257643610Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.1\" with image id \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:6e9fbc4e25a576483e6a233976353a66e4d77eb5d0530e9118e94b7d46fb3500\", size \"15779817\" in 941.79516ms" Nov 4 12:22:27.257719 containerd[1589]: time="2025-11-04T12:22:27.257673450Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.1\" returns image reference \"sha256:b5f57ec6b98676d815366685a0422bd164ecf0732540b79ac51b1186cef97ff0\"" Nov 4 12:22:27.258298 containerd[1589]: time="2025-11-04T12:22:27.258107570Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\"" Nov 4 12:22:28.268339 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1262788939.mount: Deactivated successfully. 
Nov 4 12:22:28.435074 containerd[1589]: time="2025-11-04T12:22:28.434599570Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:28.435530 containerd[1589]: time="2025-11-04T12:22:28.435502450Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.1: active requests=0, bytes read=22789030" Nov 4 12:22:28.436317 containerd[1589]: time="2025-11-04T12:22:28.436293410Z" level=info msg="ImageCreate event name:\"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:28.438428 containerd[1589]: time="2025-11-04T12:22:28.438395250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:28.439287 containerd[1589]: time="2025-11-04T12:22:28.439238650Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.1\" with image id \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\", repo tag \"registry.k8s.io/kube-proxy:v1.34.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:913cc83ca0b5588a81d86ce8eedeb3ed1e9c1326e81852a1ea4f622b74ff749a\", size \"22788047\" in 1.18110352s" Nov 4 12:22:28.439287 containerd[1589]: time="2025-11-04T12:22:28.439273090Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.1\" returns image reference \"sha256:05baa95f5142d87797a2bc1d3d11edfb0bf0a9236d436243d15061fae8b58cb9\"" Nov 4 12:22:28.439901 containerd[1589]: time="2025-11-04T12:22:28.439879850Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Nov 4 12:22:28.912102 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2425330402.mount: Deactivated successfully. Nov 4 12:22:29.254942 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 4 12:22:29.256367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:22:29.378184 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:22:29.381419 (kubelet)[2172]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 4 12:22:29.421999 kubelet[2172]: E1104 12:22:29.421936 2172 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 4 12:22:29.427681 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 4 12:22:29.427805 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 4 12:22:29.428083 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.9M memory peak. 
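Note: "kubelet.service: Scheduled restart job, restart counter is at 1" comes from the unit's Restart= policy, so the kubelet simply keeps retrying until its config file exists. The effective policy and counter can be read from systemd directly (illustrative):

systemctl show kubelet -p Restart -p RestartUSec -p NRestarts
systemctl cat kubelet --no-pager    # unit file plus any drop-ins that set Restart=/RestartSec=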
Nov 4 12:22:29.972563 containerd[1589]: time="2025-11-04T12:22:29.972511170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:29.973512 containerd[1589]: time="2025-11-04T12:22:29.973486370Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395408" Nov 4 12:22:29.974295 containerd[1589]: time="2025-11-04T12:22:29.974242410Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:29.977668 containerd[1589]: time="2025-11-04T12:22:29.977608730Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:29.978718 containerd[1589]: time="2025-11-04T12:22:29.978515050Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.53845056s" Nov 4 12:22:29.978718 containerd[1589]: time="2025-11-04T12:22:29.978549490Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Nov 4 12:22:29.979024 containerd[1589]: time="2025-11-04T12:22:29.978951930Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Nov 4 12:22:30.437630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138846311.mount: Deactivated successfully. 
Nov 4 12:22:30.441814 containerd[1589]: time="2025-11-04T12:22:30.441770170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:30.442870 containerd[1589]: time="2025-11-04T12:22:30.442840090Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268711" Nov 4 12:22:30.443872 containerd[1589]: time="2025-11-04T12:22:30.443821170Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:30.445791 containerd[1589]: time="2025-11-04T12:22:30.445744490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:30.446654 containerd[1589]: time="2025-11-04T12:22:30.446606650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 467.62392ms" Nov 4 12:22:30.446654 containerd[1589]: time="2025-11-04T12:22:30.446645770Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Nov 4 12:22:30.447210 containerd[1589]: time="2025-11-04T12:22:30.447185130Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Nov 4 12:22:33.300762 containerd[1589]: time="2025-11-04T12:22:33.300705890Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:33.301630 containerd[1589]: time="2025-11-04T12:22:33.301600930Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=97410768" Nov 4 12:22:33.302375 containerd[1589]: time="2025-11-04T12:22:33.302334930Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:33.305552 containerd[1589]: time="2025-11-04T12:22:33.305527410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:33.307017 containerd[1589]: time="2025-11-04T12:22:33.306985930Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 2.85977168s" Nov 4 12:22:33.307061 containerd[1589]: time="2025-11-04T12:22:33.307023090Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Nov 4 12:22:39.492930 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 4 12:22:39.495085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 4 12:22:39.615205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 12:22:39.615433 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 12:22:39.615773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:22:39.615948 systemd[1]: kubelet.service: Consumed 85ms CPU time, 95.1M memory peak. Nov 4 12:22:39.617768 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:22:39.641118 systemd[1]: Reload requested from client PID 2269 ('systemctl') (unit session-7.scope)... Nov 4 12:22:39.641135 systemd[1]: Reloading... Nov 4 12:22:39.720310 zram_generator::config[2316]: No configuration found. Nov 4 12:22:40.002817 systemd[1]: Reloading finished in 361 ms. Nov 4 12:22:40.062866 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 4 12:22:40.062948 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 4 12:22:40.063183 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:22:40.063238 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak. Nov 4 12:22:40.064727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:22:40.178140 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:22:40.195575 (kubelet)[2358]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 12:22:40.227081 kubelet[2358]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 12:22:40.227081 kubelet[2358]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 12:22:40.227608 kubelet[2358]: I1104 12:22:40.227567 2358 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 12:22:40.691763 kubelet[2358]: I1104 12:22:40.691723 2358 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 12:22:40.691763 kubelet[2358]: I1104 12:22:40.691756 2358 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 12:22:40.691915 kubelet[2358]: I1104 12:22:40.691783 2358 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 12:22:40.691915 kubelet[2358]: I1104 12:22:40.691789 2358 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
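Note: the two deprecation warnings above have config-file equivalents: --volume-plugin-dir maps to the volumePluginDir field of KubeletConfiguration, and the sandbox ("pause") image is now taken from the container runtime's configuration rather than from --pod-infra-container-image. A sketch of the relevant fields as they might appear in /var/lib/kubelet/config.yaml (written to a scratch path here; values are placeholders apart from the plugin directory and cgroup driver the kubelet itself logs below):

# sketch only: config-file form of the deprecated flags
cat <<'EOF' > /tmp/kubelet-config-excerpt.yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd                                                   # matches the CRI-reported driver in this log
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir
EOF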
Nov 4 12:22:40.692054 kubelet[2358]: I1104 12:22:40.692041 2358 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 12:22:40.774176 kubelet[2358]: E1104 12:22:40.774132 2358 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.89:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Nov 4 12:22:40.774805 kubelet[2358]: I1104 12:22:40.774789 2358 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 12:22:40.777976 kubelet[2358]: I1104 12:22:40.777956 2358 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 12:22:40.780844 kubelet[2358]: I1104 12:22:40.780827 2358 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /" Nov 4 12:22:40.781218 kubelet[2358]: I1104 12:22:40.781192 2358 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 12:22:40.781560 kubelet[2358]: I1104 12:22:40.781288 2358 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 12:22:40.781754 kubelet[2358]: I1104 12:22:40.781689 2358 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 12:22:40.781754 kubelet[2358]: I1104 12:22:40.781704 2358 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 12:22:40.781979 kubelet[2358]: I1104 12:22:40.781884 2358 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 12:22:40.784094 kubelet[2358]: I1104 12:22:40.784077 2358 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:22:40.785514 kubelet[2358]: I1104 12:22:40.785386 2358 kubelet.go:475] "Attempting to sync node with API server" Nov 4 12:22:40.785514 
kubelet[2358]: I1104 12:22:40.785433 2358 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 12:22:40.785514 kubelet[2358]: I1104 12:22:40.785455 2358 kubelet.go:387] "Adding apiserver pod source" Nov 4 12:22:40.786289 kubelet[2358]: E1104 12:22:40.785916 2358 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.89:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Nov 4 12:22:40.786593 kubelet[2358]: I1104 12:22:40.786577 2358 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 12:22:40.787163 kubelet[2358]: E1104 12:22:40.787134 2358 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 12:22:40.788618 kubelet[2358]: I1104 12:22:40.788595 2358 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 12:22:40.789257 kubelet[2358]: I1104 12:22:40.789223 2358 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 12:22:40.789257 kubelet[2358]: I1104 12:22:40.789257 2358 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 12:22:40.789344 kubelet[2358]: W1104 12:22:40.789322 2358 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
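Note: the repeated "connection refused" errors against https://10.0.0.89:6443 are expected at this stage: the kubelet starts before the static-pod kube-apiserver it is about to launch, so every list/watch fails until that pod is up. Two illustrative ways to watch the bootstrap converge from the node:

curl -k https://10.0.0.89:6443/healthz; echo   # refused until the apiserver static pod is running, then "ok"
crictl ps --name kube-apiserver                # shows the apiserver container once the kubelet creates it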
Nov 4 12:22:40.794129 kubelet[2358]: I1104 12:22:40.793921 2358 server.go:1262] "Started kubelet" Nov 4 12:22:40.794129 kubelet[2358]: I1104 12:22:40.794091 2358 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 12:22:40.794977 kubelet[2358]: I1104 12:22:40.794941 2358 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 12:22:40.795039 kubelet[2358]: I1104 12:22:40.794969 2358 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 12:22:40.795039 kubelet[2358]: I1104 12:22:40.795017 2358 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 12:22:40.795977 kubelet[2358]: I1104 12:22:40.795238 2358 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 12:22:40.795977 kubelet[2358]: I1104 12:22:40.795457 2358 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 12:22:40.796474 kubelet[2358]: I1104 12:22:40.794948 2358 server.go:310] "Adding debug handlers to kubelet server" Nov 4 12:22:40.797685 kubelet[2358]: E1104 12:22:40.797655 2358 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 12:22:40.797737 kubelet[2358]: I1104 12:22:40.797692 2358 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 12:22:40.797859 kubelet[2358]: I1104 12:22:40.797839 2358 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 12:22:40.797924 kubelet[2358]: I1104 12:22:40.797910 2358 reconciler.go:29] "Reconciler: start to sync state" Nov 4 12:22:40.798366 kubelet[2358]: E1104 12:22:40.798260 2358 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 12:22:40.798781 kubelet[2358]: I1104 12:22:40.798757 2358 factory.go:223] Registration of the systemd container factory successfully Nov 4 12:22:40.799376 kubelet[2358]: I1104 12:22:40.799351 2358 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 12:22:40.799548 kubelet[2358]: E1104 12:22:40.799390 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="200ms" Nov 4 12:22:40.799605 kubelet[2358]: E1104 12:22:40.799588 2358 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 12:22:40.800262 kubelet[2358]: I1104 12:22:40.800236 2358 factory.go:223] Registration of the containerd container factory successfully Nov 4 12:22:40.800646 kubelet[2358]: E1104 12:22:40.799013 2358 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.89:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.89:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1874cd30205e6ad2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-04 12:22:40.79388949 +0000 UTC m=+0.595510721,LastTimestamp:2025-11-04 12:22:40.79388949 +0000 UTC m=+0.595510721,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 4 12:22:40.814643 kubelet[2358]: I1104 12:22:40.814620 2358 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 12:22:40.814643 kubelet[2358]: I1104 12:22:40.814635 2358 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 12:22:40.814788 kubelet[2358]: I1104 12:22:40.814658 2358 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:22:40.816729 kubelet[2358]: I1104 12:22:40.816705 2358 policy_none.go:49] "None policy: Start" Nov 4 12:22:40.816797 kubelet[2358]: I1104 12:22:40.816740 2358 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 12:22:40.816797 kubelet[2358]: I1104 12:22:40.816752 2358 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 12:22:40.816989 kubelet[2358]: I1104 12:22:40.816959 2358 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 12:22:40.818152 kubelet[2358]: I1104 12:22:40.818132 2358 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Nov 4 12:22:40.818152 kubelet[2358]: I1104 12:22:40.818157 2358 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 12:22:40.818234 kubelet[2358]: I1104 12:22:40.818194 2358 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 12:22:40.818254 kubelet[2358]: E1104 12:22:40.818233 2358 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 12:22:40.818755 kubelet[2358]: E1104 12:22:40.818711 2358 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 12:22:40.820106 kubelet[2358]: I1104 12:22:40.820085 2358 policy_none.go:47] "Start" Nov 4 12:22:40.824317 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 4 12:22:40.841379 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 4 12:22:40.844367 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
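Note: the three slices created above (kubepods.slice and its burstable/besteffort children) are the cgroup hierarchy the kubelet uses for pod QoS classes; because the systemd cgroup driver is in use, they are visible as ordinary systemd units. For example (illustrative):

systemd-cgls --no-pager -u kubepods.slice                       # walk the pod cgroup tree once pods exist
systemctl show kubepods.slice -p MemoryCurrent -p TasksCurrent  # live resource accounting for all pods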
Nov 4 12:22:40.869346 kubelet[2358]: E1104 12:22:40.869305 2358 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 12:22:40.869541 kubelet[2358]: I1104 12:22:40.869524 2358 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 12:22:40.869593 kubelet[2358]: I1104 12:22:40.869540 2358 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 12:22:40.869794 kubelet[2358]: I1104 12:22:40.869770 2358 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 12:22:40.871056 kubelet[2358]: E1104 12:22:40.871035 2358 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 4 12:22:40.871171 kubelet[2358]: E1104 12:22:40.871159 2358 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 4 12:22:40.928141 systemd[1]: Created slice kubepods-burstable-podd96001f3ec277ac20e6981e641b6135d.slice - libcontainer container kubepods-burstable-podd96001f3ec277ac20e6981e641b6135d.slice. Nov 4 12:22:40.936182 kubelet[2358]: E1104 12:22:40.936129 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:22:40.939820 systemd[1]: Created slice kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice - libcontainer container kubepods-burstable-podce161b3b11c90b0b844f2e4f86b4e8cd.slice. Nov 4 12:22:40.941518 kubelet[2358]: E1104 12:22:40.941334 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:22:40.954015 systemd[1]: Created slice kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice - libcontainer container kubepods-burstable-pod72ae43bf624d285361487631af8a6ba6.slice. 
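Note: the per-pod slices above correspond to static pod manifests the kubelet found under /etc/kubernetes/manifests (the static pod path added earlier); the "No need to create a mirror pod" messages only mean the API server those mirror pods would be posted to is not reachable yet. To see what the kubelet is acting on (illustrative; the file names are the usual kubeadm layout, not read from this host):

ls /etc/kubernetes/manifests/                 # typically kube-apiserver.yaml, kube-controller-manager.yaml, kube-scheduler.yaml (and etcd.yaml when etcd runs as a static pod)
crictl pods --name kube-apiserver-localhost   # the pod sandbox created in the RunPodSandbox lines further below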
Nov 4 12:22:40.958825 kubelet[2358]: E1104 12:22:40.958794 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:22:40.970997 kubelet[2358]: I1104 12:22:40.970970 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:22:40.971402 kubelet[2358]: E1104 12:22:40.971377 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Nov 4 12:22:40.998898 kubelet[2358]: I1104 12:22:40.998834 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d96001f3ec277ac20e6981e641b6135d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d96001f3ec277ac20e6981e641b6135d\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:40.998898 kubelet[2358]: I1104 12:22:40.998873 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:40.999107 kubelet[2358]: I1104 12:22:40.999052 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:40.999215 kubelet[2358]: I1104 12:22:40.999095 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:40.999215 kubelet[2358]: I1104 12:22:40.999171 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 4 12:22:40.999215 kubelet[2358]: I1104 12:22:40.999185 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d96001f3ec277ac20e6981e641b6135d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d96001f3ec277ac20e6981e641b6135d\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:40.999421 kubelet[2358]: I1104 12:22:40.999200 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d96001f3ec277ac20e6981e641b6135d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d96001f3ec277ac20e6981e641b6135d\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:40.999421 kubelet[2358]: I1104 12:22:40.999368 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:40.999421 kubelet[2358]: I1104 12:22:40.999388 2358 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:41.000096 kubelet[2358]: E1104 12:22:41.000066 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="400ms" Nov 4 12:22:41.173106 kubelet[2358]: I1104 12:22:41.173079 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:22:41.173467 kubelet[2358]: E1104 12:22:41.173442 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Nov 4 12:22:41.239200 kubelet[2358]: E1104 12:22:41.239032 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:41.240054 containerd[1589]: time="2025-11-04T12:22:41.240019490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d96001f3ec277ac20e6981e641b6135d,Namespace:kube-system,Attempt:0,}" Nov 4 12:22:41.243950 kubelet[2358]: E1104 12:22:41.243923 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:41.244543 containerd[1589]: time="2025-11-04T12:22:41.244509770Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,}" Nov 4 12:22:41.261858 kubelet[2358]: E1104 12:22:41.261816 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:41.262314 containerd[1589]: time="2025-11-04T12:22:41.262258010Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,}" Nov 4 12:22:41.401259 kubelet[2358]: E1104 12:22:41.401201 2358 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.89:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.89:6443: connect: connection refused" interval="800ms" Nov 4 12:22:41.574758 kubelet[2358]: I1104 12:22:41.574730 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:22:41.575106 kubelet[2358]: E1104 12:22:41.575058 2358 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.89:6443/api/v1/nodes\": dial tcp 10.0.0.89:6443: connect: connection refused" node="localhost" Nov 4 12:22:41.606011 kubelet[2358]: E1104 12:22:41.605961 
2358 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.89:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Nov 4 12:22:41.722759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1116761849.mount: Deactivated successfully. Nov 4 12:22:41.727651 containerd[1589]: time="2025-11-04T12:22:41.727597450Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:22:41.729388 containerd[1589]: time="2025-11-04T12:22:41.729351170Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:22:41.731717 containerd[1589]: time="2025-11-04T12:22:41.731672930Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 4 12:22:41.732343 containerd[1589]: time="2025-11-04T12:22:41.732295690Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 12:22:41.733974 containerd[1589]: time="2025-11-04T12:22:41.733929170Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:22:41.735468 containerd[1589]: time="2025-11-04T12:22:41.735434410Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Nov 4 12:22:41.735857 containerd[1589]: time="2025-11-04T12:22:41.735820330Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:22:41.737677 containerd[1589]: time="2025-11-04T12:22:41.737633050Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 4 12:22:41.739433 containerd[1589]: time="2025-11-04T12:22:41.739389650Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 475.1446ms" Nov 4 12:22:41.739906 containerd[1589]: time="2025-11-04T12:22:41.739870490Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 493.29324ms" Nov 4 12:22:41.740706 containerd[1589]: time="2025-11-04T12:22:41.740673130Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 497.94744ms" Nov 4 12:22:41.762070 containerd[1589]: time="2025-11-04T12:22:41.761995290Z" level=info msg="connecting to shim 1f3363ce9105344e25306fb1159f5f0bcd954eaeb4872a55a29a6234f11811ba" address="unix:///run/containerd/s/7eef44759ddecbb2c774e72dc4265e5d1b2aba24ec91ff6192c300cbd6250fb6" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:22:41.773010 containerd[1589]: time="2025-11-04T12:22:41.772963130Z" level=info msg="connecting to shim 8040a08f78b9d287d28af43d490f786ddef297480ba47504dd48ffe98a58755e" address="unix:///run/containerd/s/cb304d5e5737f9669ec8aed27117631b5b6b1bc1e9c8dcf949f21cd2bb90b588" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:22:41.775516 containerd[1589]: time="2025-11-04T12:22:41.775482010Z" level=info msg="connecting to shim 44d7f76e84ef325ec54221cae4ad95d3c97620cca4212d7eb09bae2fdf87c2bf" address="unix:///run/containerd/s/db98913d0c37dfc63f2747242a6632a1a305ac8a7c8e919550741005c22786f9" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:22:41.789438 systemd[1]: Started cri-containerd-1f3363ce9105344e25306fb1159f5f0bcd954eaeb4872a55a29a6234f11811ba.scope - libcontainer container 1f3363ce9105344e25306fb1159f5f0bcd954eaeb4872a55a29a6234f11811ba. Nov 4 12:22:41.794009 systemd[1]: Started cri-containerd-44d7f76e84ef325ec54221cae4ad95d3c97620cca4212d7eb09bae2fdf87c2bf.scope - libcontainer container 44d7f76e84ef325ec54221cae4ad95d3c97620cca4212d7eb09bae2fdf87c2bf. Nov 4 12:22:41.799404 systemd[1]: Started cri-containerd-8040a08f78b9d287d28af43d490f786ddef297480ba47504dd48ffe98a58755e.scope - libcontainer container 8040a08f78b9d287d28af43d490f786ddef297480ba47504dd48ffe98a58755e. Nov 4 12:22:41.805848 kubelet[2358]: E1104 12:22:41.805817 2358 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.89:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Nov 4 12:22:41.819060 kubelet[2358]: E1104 12:22:41.819031 2358 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.89:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.89:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Nov 4 12:22:41.834185 containerd[1589]: time="2025-11-04T12:22:41.833979290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:72ae43bf624d285361487631af8a6ba6,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f3363ce9105344e25306fb1159f5f0bcd954eaeb4872a55a29a6234f11811ba\"" Nov 4 12:22:41.836103 kubelet[2358]: E1104 12:22:41.836048 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:41.836905 containerd[1589]: time="2025-11-04T12:22:41.836838090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:ce161b3b11c90b0b844f2e4f86b4e8cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"44d7f76e84ef325ec54221cae4ad95d3c97620cca4212d7eb09bae2fdf87c2bf\"" Nov 4 12:22:41.837719 kubelet[2358]: E1104 12:22:41.837688 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:41.840127 containerd[1589]: time="2025-11-04T12:22:41.840041850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d96001f3ec277ac20e6981e641b6135d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8040a08f78b9d287d28af43d490f786ddef297480ba47504dd48ffe98a58755e\"" Nov 4 12:22:41.840594 kubelet[2358]: E1104 12:22:41.840577 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:41.841932 containerd[1589]: time="2025-11-04T12:22:41.841900370Z" level=info msg="CreateContainer within sandbox \"1f3363ce9105344e25306fb1159f5f0bcd954eaeb4872a55a29a6234f11811ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 4 12:22:41.842507 containerd[1589]: time="2025-11-04T12:22:41.842238330Z" level=info msg="CreateContainer within sandbox \"44d7f76e84ef325ec54221cae4ad95d3c97620cca4212d7eb09bae2fdf87c2bf\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 4 12:22:41.845368 containerd[1589]: time="2025-11-04T12:22:41.845334970Z" level=info msg="CreateContainer within sandbox \"8040a08f78b9d287d28af43d490f786ddef297480ba47504dd48ffe98a58755e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 4 12:22:41.852175 containerd[1589]: time="2025-11-04T12:22:41.852073330Z" level=info msg="Container 69b5cc392e7e3c205575b17e10874c2f2f69d4bf1cce3bd4f95a3540abf12375: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:22:41.854321 containerd[1589]: time="2025-11-04T12:22:41.854099730Z" level=info msg="Container 8611a4f86fdac67ccd35fa4ba6a732b853671c2230c955e9031e255df60b3a54: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:22:41.855254 containerd[1589]: time="2025-11-04T12:22:41.855226690Z" level=info msg="Container db45c1969ec2ccb53032f6832a2f06bb207206e49282d06a92740817de5d7b98: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:22:41.858323 containerd[1589]: time="2025-11-04T12:22:41.858261810Z" level=info msg="CreateContainer within sandbox \"44d7f76e84ef325ec54221cae4ad95d3c97620cca4212d7eb09bae2fdf87c2bf\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"69b5cc392e7e3c205575b17e10874c2f2f69d4bf1cce3bd4f95a3540abf12375\"" Nov 4 12:22:41.859042 containerd[1589]: time="2025-11-04T12:22:41.859005410Z" level=info msg="StartContainer for \"69b5cc392e7e3c205575b17e10874c2f2f69d4bf1cce3bd4f95a3540abf12375\"" Nov 4 12:22:41.860115 containerd[1589]: time="2025-11-04T12:22:41.860089850Z" level=info msg="connecting to shim 69b5cc392e7e3c205575b17e10874c2f2f69d4bf1cce3bd4f95a3540abf12375" address="unix:///run/containerd/s/db98913d0c37dfc63f2747242a6632a1a305ac8a7c8e919550741005c22786f9" protocol=ttrpc version=3 Nov 4 12:22:41.861397 containerd[1589]: time="2025-11-04T12:22:41.861366690Z" level=info msg="CreateContainer within sandbox \"1f3363ce9105344e25306fb1159f5f0bcd954eaeb4872a55a29a6234f11811ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8611a4f86fdac67ccd35fa4ba6a732b853671c2230c955e9031e255df60b3a54\"" Nov 4 12:22:41.862932 containerd[1589]: time="2025-11-04T12:22:41.861809530Z" level=info msg="StartContainer for \"8611a4f86fdac67ccd35fa4ba6a732b853671c2230c955e9031e255df60b3a54\"" Nov 4 12:22:41.862932 containerd[1589]: time="2025-11-04T12:22:41.862790210Z" level=info 
msg="connecting to shim 8611a4f86fdac67ccd35fa4ba6a732b853671c2230c955e9031e255df60b3a54" address="unix:///run/containerd/s/7eef44759ddecbb2c774e72dc4265e5d1b2aba24ec91ff6192c300cbd6250fb6" protocol=ttrpc version=3 Nov 4 12:22:41.863287 containerd[1589]: time="2025-11-04T12:22:41.863243970Z" level=info msg="CreateContainer within sandbox \"8040a08f78b9d287d28af43d490f786ddef297480ba47504dd48ffe98a58755e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db45c1969ec2ccb53032f6832a2f06bb207206e49282d06a92740817de5d7b98\"" Nov 4 12:22:41.863632 containerd[1589]: time="2025-11-04T12:22:41.863605730Z" level=info msg="StartContainer for \"db45c1969ec2ccb53032f6832a2f06bb207206e49282d06a92740817de5d7b98\"" Nov 4 12:22:41.865378 containerd[1589]: time="2025-11-04T12:22:41.865343850Z" level=info msg="connecting to shim db45c1969ec2ccb53032f6832a2f06bb207206e49282d06a92740817de5d7b98" address="unix:///run/containerd/s/cb304d5e5737f9669ec8aed27117631b5b6b1bc1e9c8dcf949f21cd2bb90b588" protocol=ttrpc version=3 Nov 4 12:22:41.879477 systemd[1]: Started cri-containerd-69b5cc392e7e3c205575b17e10874c2f2f69d4bf1cce3bd4f95a3540abf12375.scope - libcontainer container 69b5cc392e7e3c205575b17e10874c2f2f69d4bf1cce3bd4f95a3540abf12375. Nov 4 12:22:41.882697 systemd[1]: Started cri-containerd-db45c1969ec2ccb53032f6832a2f06bb207206e49282d06a92740817de5d7b98.scope - libcontainer container db45c1969ec2ccb53032f6832a2f06bb207206e49282d06a92740817de5d7b98. Nov 4 12:22:41.889438 systemd[1]: Started cri-containerd-8611a4f86fdac67ccd35fa4ba6a732b853671c2230c955e9031e255df60b3a54.scope - libcontainer container 8611a4f86fdac67ccd35fa4ba6a732b853671c2230c955e9031e255df60b3a54. Nov 4 12:22:41.926318 containerd[1589]: time="2025-11-04T12:22:41.925268770Z" level=info msg="StartContainer for \"db45c1969ec2ccb53032f6832a2f06bb207206e49282d06a92740817de5d7b98\" returns successfully" Nov 4 12:22:41.937337 containerd[1589]: time="2025-11-04T12:22:41.936835970Z" level=info msg="StartContainer for \"8611a4f86fdac67ccd35fa4ba6a732b853671c2230c955e9031e255df60b3a54\" returns successfully" Nov 4 12:22:41.937525 containerd[1589]: time="2025-11-04T12:22:41.937499050Z" level=info msg="StartContainer for \"69b5cc392e7e3c205575b17e10874c2f2f69d4bf1cce3bd4f95a3540abf12375\" returns successfully" Nov 4 12:22:42.376856 kubelet[2358]: I1104 12:22:42.376823 2358 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:22:42.829230 kubelet[2358]: E1104 12:22:42.829184 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:22:42.829417 kubelet[2358]: E1104 12:22:42.829397 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:42.831437 kubelet[2358]: E1104 12:22:42.831412 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:22:42.831542 kubelet[2358]: E1104 12:22:42.831523 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:42.835582 kubelet[2358]: E1104 12:22:42.835561 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" 
node="localhost" Nov 4 12:22:42.835680 kubelet[2358]: E1104 12:22:42.835662 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:43.821957 kubelet[2358]: E1104 12:22:43.821917 2358 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 4 12:22:43.837851 kubelet[2358]: E1104 12:22:43.837821 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:22:43.838031 kubelet[2358]: E1104 12:22:43.838010 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:43.839046 kubelet[2358]: E1104 12:22:43.839019 2358 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 4 12:22:43.839152 kubelet[2358]: E1104 12:22:43.839133 2358 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:43.995185 kubelet[2358]: I1104 12:22:43.995130 2358 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 12:22:43.995185 kubelet[2358]: E1104 12:22:43.995181 2358 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 4 12:22:43.999385 kubelet[2358]: I1104 12:22:43.999359 2358 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:44.005902 kubelet[2358]: E1104 12:22:44.005877 2358 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:44.005902 kubelet[2358]: I1104 12:22:44.005901 2358 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:44.010510 kubelet[2358]: E1104 12:22:44.010483 2358 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:44.010510 kubelet[2358]: I1104 12:22:44.010511 2358 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 12:22:44.012056 kubelet[2358]: E1104 12:22:44.012032 2358 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 4 12:22:44.790475 kubelet[2358]: I1104 12:22:44.790069 2358 apiserver.go:52] "Watching apiserver" Nov 4 12:22:44.798628 kubelet[2358]: I1104 12:22:44.798597 2358 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 12:22:46.125139 systemd[1]: Reload requested from client PID 2649 ('systemctl') (unit session-7.scope)... Nov 4 12:22:46.125157 systemd[1]: Reloading... Nov 4 12:22:46.194313 zram_generator::config[2693]: No configuration found. 
Nov 4 12:22:46.389583 systemd[1]: Reloading finished in 264 ms. Nov 4 12:22:46.409473 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:22:46.420368 systemd[1]: kubelet.service: Deactivated successfully. Nov 4 12:22:46.421377 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:22:46.421440 systemd[1]: kubelet.service: Consumed 871ms CPU time, 121.9M memory peak. Nov 4 12:22:46.423128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 4 12:22:46.582749 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 4 12:22:46.586888 (kubelet)[2735]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 4 12:22:46.629307 kubelet[2735]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 4 12:22:46.629307 kubelet[2735]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 4 12:22:46.629307 kubelet[2735]: I1104 12:22:46.629140 2735 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 4 12:22:46.640093 kubelet[2735]: I1104 12:22:46.639965 2735 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Nov 4 12:22:46.640093 kubelet[2735]: I1104 12:22:46.639998 2735 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 4 12:22:46.640093 kubelet[2735]: I1104 12:22:46.640030 2735 watchdog_linux.go:95] "Systemd watchdog is not enabled" Nov 4 12:22:46.640093 kubelet[2735]: I1104 12:22:46.640036 2735 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Nov 4 12:22:46.640287 kubelet[2735]: I1104 12:22:46.640258 2735 server.go:956] "Client rotation is on, will bootstrap in background" Nov 4 12:22:46.641689 kubelet[2735]: I1104 12:22:46.641660 2735 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Nov 4 12:22:46.644304 kubelet[2735]: I1104 12:22:46.644269 2735 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 4 12:22:46.647597 kubelet[2735]: I1104 12:22:46.647575 2735 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 4 12:22:46.652892 kubelet[2735]: I1104 12:22:46.652864 2735 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Nov 4 12:22:46.653079 kubelet[2735]: I1104 12:22:46.653052 2735 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 4 12:22:46.653221 kubelet[2735]: I1104 12:22:46.653077 2735 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 4 12:22:46.653315 kubelet[2735]: I1104 12:22:46.653223 2735 topology_manager.go:138] "Creating topology manager with none policy" Nov 4 12:22:46.653315 kubelet[2735]: I1104 12:22:46.653231 2735 container_manager_linux.go:306] "Creating device plugin manager" Nov 4 12:22:46.653315 kubelet[2735]: I1104 12:22:46.653254 2735 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Nov 4 12:22:46.654134 kubelet[2735]: I1104 12:22:46.654118 2735 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:22:46.654329 kubelet[2735]: I1104 12:22:46.654316 2735 kubelet.go:475] "Attempting to sync node with API server" Nov 4 12:22:46.654364 kubelet[2735]: I1104 12:22:46.654334 2735 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 4 12:22:46.655385 kubelet[2735]: I1104 12:22:46.655366 2735 kubelet.go:387] "Adding apiserver pod source" Nov 4 12:22:46.655385 kubelet[2735]: I1104 12:22:46.655388 2735 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 4 12:22:46.661156 kubelet[2735]: I1104 12:22:46.661131 2735 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 4 12:22:46.661746 kubelet[2735]: I1104 12:22:46.661713 2735 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Nov 4 12:22:46.661792 kubelet[2735]: I1104 12:22:46.661751 2735 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Nov 4 12:22:46.666867 kubelet[2735]: I1104 
12:22:46.666841 2735 server.go:1262] "Started kubelet" Nov 4 12:22:46.668280 kubelet[2735]: I1104 12:22:46.667417 2735 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 4 12:22:46.668280 kubelet[2735]: I1104 12:22:46.667483 2735 server_v1.go:49] "podresources" method="list" useActivePods=true Nov 4 12:22:46.668280 kubelet[2735]: I1104 12:22:46.667668 2735 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 4 12:22:46.668280 kubelet[2735]: I1104 12:22:46.667716 2735 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Nov 4 12:22:46.669761 kubelet[2735]: I1104 12:22:46.669725 2735 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 4 12:22:46.671073 kubelet[2735]: I1104 12:22:46.667722 2735 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 4 12:22:46.671202 kubelet[2735]: I1104 12:22:46.671183 2735 server.go:310] "Adding debug handlers to kubelet server" Nov 4 12:22:46.676316 kubelet[2735]: I1104 12:22:46.675204 2735 volume_manager.go:313] "Starting Kubelet Volume Manager" Nov 4 12:22:46.676316 kubelet[2735]: E1104 12:22:46.676199 2735 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 4 12:22:46.676420 kubelet[2735]: I1104 12:22:46.676352 2735 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Nov 4 12:22:46.676681 kubelet[2735]: I1104 12:22:46.676656 2735 reconciler.go:29] "Reconciler: start to sync state" Nov 4 12:22:46.677326 kubelet[2735]: I1104 12:22:46.677294 2735 factory.go:223] Registration of the systemd container factory successfully Nov 4 12:22:46.678242 kubelet[2735]: I1104 12:22:46.677391 2735 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 4 12:22:46.678242 kubelet[2735]: E1104 12:22:46.677802 2735 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 4 12:22:46.682578 kubelet[2735]: I1104 12:22:46.682540 2735 factory.go:223] Registration of the containerd container factory successfully Nov 4 12:22:46.695970 kubelet[2735]: I1104 12:22:46.695869 2735 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Nov 4 12:22:46.698338 kubelet[2735]: I1104 12:22:46.698150 2735 kubelet_network_linux.go:54] "Initialized iptables rules." 
protocol="IPv6" Nov 4 12:22:46.698338 kubelet[2735]: I1104 12:22:46.698176 2735 status_manager.go:244] "Starting to sync pod status with apiserver" Nov 4 12:22:46.698338 kubelet[2735]: I1104 12:22:46.698207 2735 kubelet.go:2427] "Starting kubelet main sync loop" Nov 4 12:22:46.698338 kubelet[2735]: E1104 12:22:46.698250 2735 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 4 12:22:46.721592 kubelet[2735]: I1104 12:22:46.721324 2735 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 4 12:22:46.721592 kubelet[2735]: I1104 12:22:46.721344 2735 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 4 12:22:46.721592 kubelet[2735]: I1104 12:22:46.721364 2735 state_mem.go:36] "Initialized new in-memory state store" Nov 4 12:22:46.722448 kubelet[2735]: I1104 12:22:46.721487 2735 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 4 12:22:46.722448 kubelet[2735]: I1104 12:22:46.722321 2735 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 4 12:22:46.722448 kubelet[2735]: I1104 12:22:46.722351 2735 policy_none.go:49] "None policy: Start" Nov 4 12:22:46.722448 kubelet[2735]: I1104 12:22:46.722361 2735 memory_manager.go:187] "Starting memorymanager" policy="None" Nov 4 12:22:46.722448 kubelet[2735]: I1104 12:22:46.722374 2735 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Nov 4 12:22:46.722624 kubelet[2735]: I1104 12:22:46.722494 2735 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Nov 4 12:22:46.722624 kubelet[2735]: I1104 12:22:46.722504 2735 policy_none.go:47] "Start" Nov 4 12:22:46.726087 kubelet[2735]: E1104 12:22:46.726061 2735 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Nov 4 12:22:46.726240 kubelet[2735]: I1104 12:22:46.726223 2735 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 4 12:22:46.726303 kubelet[2735]: I1104 12:22:46.726242 2735 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 4 12:22:46.726924 kubelet[2735]: I1104 12:22:46.726886 2735 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 4 12:22:46.728178 kubelet[2735]: E1104 12:22:46.728137 2735 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 4 12:22:46.799472 kubelet[2735]: I1104 12:22:46.799433 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:46.799695 kubelet[2735]: I1104 12:22:46.799577 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:46.799695 kubelet[2735]: I1104 12:22:46.799590 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 4 12:22:46.830249 kubelet[2735]: I1104 12:22:46.830222 2735 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 4 12:22:46.835831 kubelet[2735]: I1104 12:22:46.835799 2735 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 4 12:22:46.835938 kubelet[2735]: I1104 12:22:46.835875 2735 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 4 12:22:46.979358 kubelet[2735]: I1104 12:22:46.977834 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d96001f3ec277ac20e6981e641b6135d-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d96001f3ec277ac20e6981e641b6135d\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:46.979358 kubelet[2735]: I1104 12:22:46.977923 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:46.979358 kubelet[2735]: I1104 12:22:46.977991 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:46.979358 kubelet[2735]: I1104 12:22:46.978025 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:46.979358 kubelet[2735]: I1104 12:22:46.978044 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:46.979561 kubelet[2735]: I1104 12:22:46.978063 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d96001f3ec277ac20e6981e641b6135d-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d96001f3ec277ac20e6981e641b6135d\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:46.979561 kubelet[2735]: I1104 12:22:46.978080 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d96001f3ec277ac20e6981e641b6135d-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d96001f3ec277ac20e6981e641b6135d\") " pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:46.979561 kubelet[2735]: I1104 12:22:46.978093 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ce161b3b11c90b0b844f2e4f86b4e8cd-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"ce161b3b11c90b0b844f2e4f86b4e8cd\") " pod="kube-system/kube-controller-manager-localhost" Nov 4 12:22:46.979561 kubelet[2735]: I1104 12:22:46.978111 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/72ae43bf624d285361487631af8a6ba6-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"72ae43bf624d285361487631af8a6ba6\") " pod="kube-system/kube-scheduler-localhost" Nov 4 12:22:47.106177 kubelet[2735]: E1104 12:22:47.106119 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:47.106360 kubelet[2735]: E1104 12:22:47.106200 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:47.106360 kubelet[2735]: E1104 12:22:47.106241 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:47.128572 sudo[2777]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 4 12:22:47.128831 sudo[2777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 4 12:22:47.448756 sudo[2777]: pam_unix(sudo:session): session closed for user root Nov 4 12:22:47.656558 kubelet[2735]: I1104 12:22:47.656511 2735 apiserver.go:52] "Watching apiserver" Nov 4 12:22:47.677090 kubelet[2735]: I1104 12:22:47.677063 2735 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Nov 4 12:22:47.712396 kubelet[2735]: I1104 12:22:47.711802 2735 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:47.712808 kubelet[2735]: E1104 12:22:47.712689 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:47.713288 kubelet[2735]: E1104 12:22:47.713034 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:47.720078 kubelet[2735]: E1104 12:22:47.719953 2735 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 4 12:22:47.720430 kubelet[2735]: E1104 12:22:47.720358 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:47.757372 kubelet[2735]: I1104 12:22:47.757162 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" 
podStartSLOduration=1.75714401 podStartE2EDuration="1.75714401s" podCreationTimestamp="2025-11-04 12:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:22:47.75580961 +0000 UTC m=+1.166101041" watchObservedRunningTime="2025-11-04 12:22:47.75714401 +0000 UTC m=+1.167435281" Nov 4 12:22:47.773235 kubelet[2735]: I1104 12:22:47.773181 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.77316629 podStartE2EDuration="1.77316629s" podCreationTimestamp="2025-11-04 12:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:22:47.76617325 +0000 UTC m=+1.176464601" watchObservedRunningTime="2025-11-04 12:22:47.77316629 +0000 UTC m=+1.183457601" Nov 4 12:22:47.782302 kubelet[2735]: I1104 12:22:47.782130 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.78212005 podStartE2EDuration="1.78212005s" podCreationTimestamp="2025-11-04 12:22:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:22:47.77408989 +0000 UTC m=+1.184381201" watchObservedRunningTime="2025-11-04 12:22:47.78212005 +0000 UTC m=+1.192411321" Nov 4 12:22:48.714300 kubelet[2735]: E1104 12:22:48.714037 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:48.714793 kubelet[2735]: E1104 12:22:48.714773 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:48.714917 kubelet[2735]: E1104 12:22:48.714880 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:49.361122 sudo[1803]: pam_unix(sudo:session): session closed for user root Nov 4 12:22:49.362853 sshd[1802]: Connection closed by 10.0.0.1 port 42438 Nov 4 12:22:49.363291 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Nov 4 12:22:49.366995 systemd[1]: sshd@6-10.0.0.89:22-10.0.0.1:42438.service: Deactivated successfully. Nov 4 12:22:49.368773 systemd[1]: session-7.scope: Deactivated successfully. Nov 4 12:22:49.368932 systemd[1]: session-7.scope: Consumed 8.587s CPU time, 250.9M memory peak. Nov 4 12:22:49.369888 systemd-logind[1570]: Session 7 logged out. Waiting for processes to exit. Nov 4 12:22:49.370867 systemd-logind[1570]: Removed session 7. Nov 4 12:22:49.715483 kubelet[2735]: E1104 12:22:49.715370 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:52.078989 kubelet[2735]: I1104 12:22:52.078917 2735 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 4 12:22:52.079748 containerd[1589]: time="2025-11-04T12:22:52.079708826Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
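[editor's note] The recurring "Nameserver limits exceeded" warnings mean the host's /etc/resolv.conf lists more nameservers than the resolver (and therefore the kubelet's pod DNS config) will use; the applied line shown here keeps only 1.1.1.1, 1.0.0.1 and 8.8.8.8. The small check below reports which entries would be dropped. The limit of three matches the classic glibc resolver cap; treat that constant and the file path as assumptions about this host rather than something stated in the log.

```python
MAX_NAMESERVERS = 3  # classic resolver limit; the kubelet warns and truncates beyond this

def check_nameservers(path: str = "/etc/resolv.conf"):
    """Print which nameserver entries are kept and which would be omitted."""
    with open(path) as f:
        servers = [line.split()[1]
                   for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]
    kept, dropped = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
    print("applied :", " ".join(kept))
    if dropped:
        print("omitted :", " ".join(dropped))

if __name__ == "__main__":
    check_nameservers()
```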
Nov 4 12:22:52.080371 kubelet[2735]: I1104 12:22:52.080199 2735 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 4 12:22:52.820603 systemd[1]: Created slice kubepods-besteffort-pod2af5762f_23a2_4bbb_b10e_30549fdd4530.slice - libcontainer container kubepods-besteffort-pod2af5762f_23a2_4bbb_b10e_30549fdd4530.slice. Nov 4 12:22:52.824091 kubelet[2735]: I1104 12:22:52.823907 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2af5762f-23a2-4bbb-b10e-30549fdd4530-lib-modules\") pod \"kube-proxy-4prcr\" (UID: \"2af5762f-23a2-4bbb-b10e-30549fdd4530\") " pod="kube-system/kube-proxy-4prcr" Nov 4 12:22:52.824091 kubelet[2735]: I1104 12:22:52.823942 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2af5762f-23a2-4bbb-b10e-30549fdd4530-kube-proxy\") pod \"kube-proxy-4prcr\" (UID: \"2af5762f-23a2-4bbb-b10e-30549fdd4530\") " pod="kube-system/kube-proxy-4prcr" Nov 4 12:22:52.824091 kubelet[2735]: I1104 12:22:52.823963 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lt6ng\" (UniqueName: \"kubernetes.io/projected/2af5762f-23a2-4bbb-b10e-30549fdd4530-kube-api-access-lt6ng\") pod \"kube-proxy-4prcr\" (UID: \"2af5762f-23a2-4bbb-b10e-30549fdd4530\") " pod="kube-system/kube-proxy-4prcr" Nov 4 12:22:52.824091 kubelet[2735]: I1104 12:22:52.824031 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2af5762f-23a2-4bbb-b10e-30549fdd4530-xtables-lock\") pod \"kube-proxy-4prcr\" (UID: \"2af5762f-23a2-4bbb-b10e-30549fdd4530\") " pod="kube-system/kube-proxy-4prcr" Nov 4 12:22:52.842500 systemd[1]: Created slice kubepods-burstable-pod30dad178_8cfb_42e8_9abf_e5daba536063.slice - libcontainer container kubepods-burstable-pod30dad178_8cfb_42e8_9abf_e5daba536063.slice. 
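[editor's note] The "Created slice" entries show how pod cgroups are named under the systemd cgroup driver: the QoS class becomes part of the slice name and dashes in the pod UID become underscores (compare kubepods-besteffort-pod2af5762f_23a2_4bbb_b10e_30549fdd4530.slice with the UID 2af5762f-23a2-4bbb-b10e-30549fdd4530 above). The helper below merely reproduces that naming pattern as seen in these entries; it is an illustration, not kubelet code.

```python
def pod_slice_name(pod_uid: str, qos_class: str = "besteffort") -> str:
    """Build the systemd slice name used for a pod cgroup.

    qos_class is "" (Guaranteed pods sit directly under kubepods.slice),
    "besteffort", or "burstable".
    """
    uid = pod_uid.replace("-", "_")  # '-' separates hierarchy levels in slice names, so it is swapped for '_'
    infix = f"-{qos_class}" if qos_class else ""
    return f"kubepods{infix}-pod{uid}.slice"

# Matches the entries above:
assert pod_slice_name("2af5762f-23a2-4bbb-b10e-30549fdd4530") == \
    "kubepods-besteffort-pod2af5762f_23a2_4bbb_b10e_30549fdd4530.slice"
assert pod_slice_name("30dad178-8cfb-42e8-9abf-e5daba536063", "burstable") == \
    "kubepods-burstable-pod30dad178_8cfb_42e8_9abf_e5daba536063.slice"
```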
Nov 4 12:22:52.925070 kubelet[2735]: I1104 12:22:52.925019 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cni-path\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925070 kubelet[2735]: I1104 12:22:52.925059 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-hubble-tls\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925218 kubelet[2735]: I1104 12:22:52.925100 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-run\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925218 kubelet[2735]: I1104 12:22:52.925132 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-config-path\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925218 kubelet[2735]: I1104 12:22:52.925187 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-kernel\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925357 kubelet[2735]: I1104 12:22:52.925337 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-bpf-maps\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925388 kubelet[2735]: I1104 12:22:52.925366 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-etc-cni-netd\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925409 kubelet[2735]: I1104 12:22:52.925392 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-lib-modules\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925434 kubelet[2735]: I1104 12:22:52.925426 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-xtables-lock\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925456 kubelet[2735]: I1104 12:22:52.925440 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/30dad178-8cfb-42e8-9abf-e5daba536063-clustermesh-secrets\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925478 kubelet[2735]: I1104 12:22:52.925466 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-net\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925500 kubelet[2735]: I1104 12:22:52.925481 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cld75\" (UniqueName: \"kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-kube-api-access-cld75\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925522 kubelet[2735]: I1104 12:22:52.925506 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-hostproc\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.925563 kubelet[2735]: I1104 12:22:52.925545 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-cgroup\") pod \"cilium-nvmkz\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " pod="kube-system/cilium-nvmkz" Nov 4 12:22:52.932401 kubelet[2735]: E1104 12:22:52.932376 2735 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 4 12:22:52.932678 kubelet[2735]: E1104 12:22:52.932480 2735 projected.go:196] Error preparing data for projected volume kube-api-access-lt6ng for pod kube-system/kube-proxy-4prcr: configmap "kube-root-ca.crt" not found Nov 4 12:22:52.932678 kubelet[2735]: E1104 12:22:52.932553 2735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/2af5762f-23a2-4bbb-b10e-30549fdd4530-kube-api-access-lt6ng podName:2af5762f-23a2-4bbb-b10e-30549fdd4530 nodeName:}" failed. No retries permitted until 2025-11-04 12:22:53.432531067 +0000 UTC m=+6.842822378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lt6ng" (UniqueName: "kubernetes.io/projected/2af5762f-23a2-4bbb-b10e-30549fdd4530-kube-api-access-lt6ng") pod "kube-proxy-4prcr" (UID: "2af5762f-23a2-4bbb-b10e-30549fdd4530") : configmap "kube-root-ca.crt" not found Nov 4 12:22:53.035478 kubelet[2735]: E1104 12:22:53.035439 2735 projected.go:291] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 4 12:22:53.035478 kubelet[2735]: E1104 12:22:53.035467 2735 projected.go:196] Error preparing data for projected volume kube-api-access-cld75 for pod kube-system/cilium-nvmkz: configmap "kube-root-ca.crt" not found Nov 4 12:22:53.035623 kubelet[2735]: E1104 12:22:53.035519 2735 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-kube-api-access-cld75 podName:30dad178-8cfb-42e8-9abf-e5daba536063 nodeName:}" failed. No retries permitted until 2025-11-04 12:22:53.535501602 +0000 UTC m=+6.945792913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cld75" (UniqueName: "kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-kube-api-access-cld75") pod "cilium-nvmkz" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063") : configmap "kube-root-ca.crt" not found Nov 4 12:22:53.143534 kubelet[2735]: E1104 12:22:53.143429 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.224126 systemd[1]: Created slice kubepods-besteffort-podc912f37b_377f_476e_8fc9_86ebf83bccb9.slice - libcontainer container kubepods-besteffort-podc912f37b_377f_476e_8fc9_86ebf83bccb9.slice. Nov 4 12:22:53.228292 kubelet[2735]: I1104 12:22:53.228102 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49984\" (UniqueName: \"kubernetes.io/projected/c912f37b-377f-476e-8fc9-86ebf83bccb9-kube-api-access-49984\") pod \"cilium-operator-6f9c7c5859-5bndp\" (UID: \"c912f37b-377f-476e-8fc9-86ebf83bccb9\") " pod="kube-system/cilium-operator-6f9c7c5859-5bndp" Nov 4 12:22:53.228292 kubelet[2735]: I1104 12:22:53.228142 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c912f37b-377f-476e-8fc9-86ebf83bccb9-cilium-config-path\") pod \"cilium-operator-6f9c7c5859-5bndp\" (UID: \"c912f37b-377f-476e-8fc9-86ebf83bccb9\") " pod="kube-system/cilium-operator-6f9c7c5859-5bndp" Nov 4 12:22:53.534376 kubelet[2735]: E1104 12:22:53.534332 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.535391 containerd[1589]: time="2025-11-04T12:22:53.535353744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-5bndp,Uid:c912f37b-377f-476e-8fc9-86ebf83bccb9,Namespace:kube-system,Attempt:0,}" Nov 4 12:22:53.555617 containerd[1589]: time="2025-11-04T12:22:53.554967857Z" level=info msg="connecting to shim 3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca" address="unix:///run/containerd/s/e689d050528f0e4134823985bc9899ea107ebdc945a47c4fff4be553b5eadae8" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:22:53.575467 systemd[1]: Started cri-containerd-3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca.scope - libcontainer container 3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca. 
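[editor's note] The MountVolume.SetUp failures for the projected kube-api-access-* volumes are expected this early: the kube-root-ca.crt ConfigMap does not exist yet, so the volume manager schedules a retry ("No retries permitted until ... durationBeforeRetry 500ms") and succeeds on a later pass. The sketch below shows the general shape of that kind of exponential backoff, seeded with the 500ms delay visible in the log; the doubling factor, cap, and attempt limit are illustrative assumptions, not the kubelet's exact parameters.

```python
import time

def retry_with_backoff(operation, initial_delay=0.5, factor=2.0, max_delay=32.0, max_attempts=8):
    """Retry `operation`, sleeping an exponentially growing delay between attempts."""
    delay = initial_delay
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception as exc:
            if attempt == max_attempts:
                raise
            print(f"attempt {attempt} failed ({exc}); retrying in {delay:.1f}s")
            time.sleep(delay)
            delay = min(delay * factor, max_delay)

# Example: an operation that fails until some external condition is met.
state = {"calls": 0}
def mount_projected_volume():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RuntimeError('configmap "kube-root-ca.crt" not found')
    return "mounted"

print(retry_with_backoff(mount_projected_volume))
```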
Nov 4 12:22:53.619633 containerd[1589]: time="2025-11-04T12:22:53.619594970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6f9c7c5859-5bndp,Uid:c912f37b-377f-476e-8fc9-86ebf83bccb9,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\"" Nov 4 12:22:53.620421 kubelet[2735]: E1104 12:22:53.620397 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.621409 containerd[1589]: time="2025-11-04T12:22:53.621327507Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 4 12:22:53.722389 kubelet[2735]: E1104 12:22:53.722190 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.736534 kubelet[2735]: E1104 12:22:53.735822 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.736685 containerd[1589]: time="2025-11-04T12:22:53.736258714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4prcr,Uid:2af5762f-23a2-4bbb-b10e-30549fdd4530,Namespace:kube-system,Attempt:0,}" Nov 4 12:22:53.747386 kubelet[2735]: E1104 12:22:53.747358 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.747967 containerd[1589]: time="2025-11-04T12:22:53.747930709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nvmkz,Uid:30dad178-8cfb-42e8-9abf-e5daba536063,Namespace:kube-system,Attempt:0,}" Nov 4 12:22:53.751887 containerd[1589]: time="2025-11-04T12:22:53.751840547Z" level=info msg="connecting to shim 3c2887759f03b0656a73986d150ba8490237f2b824f0622e3cf75e3c294af4d4" address="unix:///run/containerd/s/8b2a2509c3bd3d8d1ea5ace4439ab5a684952ce168dcd8ccd585ff91a4981595" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:22:53.768539 containerd[1589]: time="2025-11-04T12:22:53.768494671Z" level=info msg="connecting to shim 05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa" address="unix:///run/containerd/s/3131d0121f9c37af0d3e26f6b662b58b410324ffd243e0e52a44dd70f69bed61" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:22:53.778456 systemd[1]: Started cri-containerd-3c2887759f03b0656a73986d150ba8490237f2b824f0622e3cf75e3c294af4d4.scope - libcontainer container 3c2887759f03b0656a73986d150ba8490237f2b824f0622e3cf75e3c294af4d4. Nov 4 12:22:53.797466 systemd[1]: Started cri-containerd-05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa.scope - libcontainer container 05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa. 
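[editor's note] The operator image is pulled by a reference carrying both a tag and a digest (quay.io/cilium/operator-generic:v1.12.5@sha256:...); containerd resolves it by digest, which is why the later "Pulled image" entry reports an empty repo tag. The helper below is a rough sketch for splitting references of exactly this shape, not a full OCI reference parser.

```python
def split_image_ref(ref: str):
    """Split an image reference of the form repo[:tag][@sha256:digest] into its parts."""
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, tag = ref, None
    # A ':' after the last '/' is a tag separator (a ':' before it would be a registry port).
    if ref.rfind(":") > ref.rfind("/"):
        repo, tag = ref[:ref.rfind(":")], ref[ref.rfind(":") + 1:]
    return repo, tag, digest

repo, tag, digest = split_image_ref(
    "quay.io/cilium/operator-generic:v1.12.5"
    "@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e"
)
print(repo)    # quay.io/cilium/operator-generic
print(tag)     # v1.12.5
print(digest)  # sha256:b296eb7f...
```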
Nov 4 12:22:53.821924 containerd[1589]: time="2025-11-04T12:22:53.821867954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-4prcr,Uid:2af5762f-23a2-4bbb-b10e-30549fdd4530,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c2887759f03b0656a73986d150ba8490237f2b824f0622e3cf75e3c294af4d4\"" Nov 4 12:22:53.822808 kubelet[2735]: E1104 12:22:53.822785 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.825323 containerd[1589]: time="2025-11-04T12:22:53.824771463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nvmkz,Uid:30dad178-8cfb-42e8-9abf-e5daba536063,Namespace:kube-system,Attempt:0,} returns sandbox id \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\"" Nov 4 12:22:53.825641 kubelet[2735]: E1104 12:22:53.825615 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:53.829362 containerd[1589]: time="2025-11-04T12:22:53.829330387Z" level=info msg="CreateContainer within sandbox \"3c2887759f03b0656a73986d150ba8490237f2b824f0622e3cf75e3c294af4d4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 4 12:22:53.841039 containerd[1589]: time="2025-11-04T12:22:53.840998222Z" level=info msg="Container d78755a9c48233968110bd5dc1f446f885019fcd843e0b0d52168dfa9340ca25: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:22:53.847791 containerd[1589]: time="2025-11-04T12:22:53.847740888Z" level=info msg="CreateContainer within sandbox \"3c2887759f03b0656a73986d150ba8490237f2b824f0622e3cf75e3c294af4d4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d78755a9c48233968110bd5dc1f446f885019fcd843e0b0d52168dfa9340ca25\"" Nov 4 12:22:53.848257 containerd[1589]: time="2025-11-04T12:22:53.848236173Z" level=info msg="StartContainer for \"d78755a9c48233968110bd5dc1f446f885019fcd843e0b0d52168dfa9340ca25\"" Nov 4 12:22:53.853532 containerd[1589]: time="2025-11-04T12:22:53.853496064Z" level=info msg="connecting to shim d78755a9c48233968110bd5dc1f446f885019fcd843e0b0d52168dfa9340ca25" address="unix:///run/containerd/s/8b2a2509c3bd3d8d1ea5ace4439ab5a684952ce168dcd8ccd585ff91a4981595" protocol=ttrpc version=3 Nov 4 12:22:53.878451 systemd[1]: Started cri-containerd-d78755a9c48233968110bd5dc1f446f885019fcd843e0b0d52168dfa9340ca25.scope - libcontainer container d78755a9c48233968110bd5dc1f446f885019fcd843e0b0d52168dfa9340ca25. Nov 4 12:22:53.913256 containerd[1589]: time="2025-11-04T12:22:53.913218050Z" level=info msg="StartContainer for \"d78755a9c48233968110bd5dc1f446f885019fcd843e0b0d52168dfa9340ca25\" returns successfully" Nov 4 12:22:54.698294 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2528531553.mount: Deactivated successfully. 
Nov 4 12:22:54.743731 kubelet[2735]: E1104 12:22:54.743624 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:54.743731 kubelet[2735]: E1104 12:22:54.743669 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:55.161216 containerd[1589]: time="2025-11-04T12:22:55.161172806Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:55.162090 containerd[1589]: time="2025-11-04T12:22:55.161617050Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Nov 4 12:22:55.167600 containerd[1589]: time="2025-11-04T12:22:55.167564181Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:22:55.169707 containerd[1589]: time="2025-11-04T12:22:55.169657239Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.548290451s" Nov 4 12:22:55.169707 containerd[1589]: time="2025-11-04T12:22:55.169704959Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 4 12:22:55.172855 containerd[1589]: time="2025-11-04T12:22:55.172591744Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 4 12:22:55.175494 containerd[1589]: time="2025-11-04T12:22:55.174791923Z" level=info msg="CreateContainer within sandbox \"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 4 12:22:55.182858 containerd[1589]: time="2025-11-04T12:22:55.182789552Z" level=info msg="Container 0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:22:55.186391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1524825217.mount: Deactivated successfully. 
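From the figures logged above, the operator image pull moved roughly 17.1 MB ("bytes read=17135306") in the reported 1.548290451s, on the order of 11 MB/s. A back-of-the-envelope check of that rate, using only the two logged values:

```go
// pullrate.go — rough arithmetic on the figures logged above: bytes read for
// the operator image divided by the reported pull duration.
package main

import (
	"fmt"
	"time"
)

func main() {
	const bytesRead = 17135306 // "bytes read=17135306"
	d, err := time.ParseDuration("1.548290451s") // "... in 1.548290451s"
	if err != nil {
		panic(err)
	}
	rate := float64(bytesRead) / d.Seconds()
	fmt.Printf("~%.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
}
```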
Nov 4 12:22:55.188657 containerd[1589]: time="2025-11-04T12:22:55.188608042Z" level=info msg="CreateContainer within sandbox \"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\"" Nov 4 12:22:55.189039 containerd[1589]: time="2025-11-04T12:22:55.189014846Z" level=info msg="StartContainer for \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\"" Nov 4 12:22:55.190262 containerd[1589]: time="2025-11-04T12:22:55.190236816Z" level=info msg="connecting to shim 0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64" address="unix:///run/containerd/s/e689d050528f0e4134823985bc9899ea107ebdc945a47c4fff4be553b5eadae8" protocol=ttrpc version=3 Nov 4 12:22:55.230464 systemd[1]: Started cri-containerd-0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64.scope - libcontainer container 0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64. Nov 4 12:22:55.255493 containerd[1589]: time="2025-11-04T12:22:55.255450699Z" level=info msg="StartContainer for \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" returns successfully" Nov 4 12:22:55.748365 kubelet[2735]: E1104 12:22:55.748334 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:55.761812 kubelet[2735]: I1104 12:22:55.761728 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-4prcr" podStartSLOduration=3.761712342 podStartE2EDuration="3.761712342s" podCreationTimestamp="2025-11-04 12:22:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:22:54.753420389 +0000 UTC m=+8.163711700" watchObservedRunningTime="2025-11-04 12:22:55.761712342 +0000 UTC m=+9.172003653" Nov 4 12:22:56.750293 kubelet[2735]: E1104 12:22:56.750213 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:57.813102 kubelet[2735]: E1104 12:22:57.813059 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:57.827307 kubelet[2735]: I1104 12:22:57.826983 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6f9c7c5859-5bndp" podStartSLOduration=3.276328311 podStartE2EDuration="4.826965543s" podCreationTimestamp="2025-11-04 12:22:53 +0000 UTC" firstStartedPulling="2025-11-04 12:22:53.621023184 +0000 UTC m=+7.031314455" lastFinishedPulling="2025-11-04 12:22:55.171660376 +0000 UTC m=+8.581951687" observedRunningTime="2025-11-04 12:22:55.761674382 +0000 UTC m=+9.171965693" watchObservedRunningTime="2025-11-04 12:22:57.826965543 +0000 UTC m=+11.237256854" Nov 4 12:22:57.933911 kubelet[2735]: E1104 12:22:57.933601 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:58.757573 kubelet[2735]: E1104 12:22:58.757540 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:22:59.521657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3310885194.mount: Deactivated successfully. Nov 4 12:23:01.346227 update_engine[1573]: I20251104 12:23:01.346160 1573 update_attempter.cc:509] Updating boot flags... Nov 4 12:23:02.790535 containerd[1589]: time="2025-11-04T12:23:02.790491729Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:23:02.791284 containerd[1589]: time="2025-11-04T12:23:02.791240213Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Nov 4 12:23:02.791983 containerd[1589]: time="2025-11-04T12:23:02.791947577Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 4 12:23:02.793817 containerd[1589]: time="2025-11-04T12:23:02.793786747Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.620677598s" Nov 4 12:23:02.793871 containerd[1589]: time="2025-11-04T12:23:02.793822267Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 4 12:23:02.798746 containerd[1589]: time="2025-11-04T12:23:02.798716054Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 12:23:02.805310 containerd[1589]: time="2025-11-04T12:23:02.804819807Z" level=info msg="Container 8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:02.808044 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount437166808.mount: Deactivated successfully. Nov 4 12:23:02.810728 containerd[1589]: time="2025-11-04T12:23:02.810685399Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\"" Nov 4 12:23:02.812001 containerd[1589]: time="2025-11-04T12:23:02.811974526Z" level=info msg="StartContainer for \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\"" Nov 4 12:23:02.812819 containerd[1589]: time="2025-11-04T12:23:02.812797691Z" level=info msg="connecting to shim 8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92" address="unix:///run/containerd/s/3131d0121f9c37af0d3e26f6b662b58b410324ffd243e0e52a44dd70f69bed61" protocol=ttrpc version=3 Nov 4 12:23:02.838457 systemd[1]: Started cri-containerd-8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92.scope - libcontainer container 8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92. 
Nov 4 12:23:02.862552 containerd[1589]: time="2025-11-04T12:23:02.862512484Z" level=info msg="StartContainer for \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" returns successfully" Nov 4 12:23:02.873782 systemd[1]: cri-containerd-8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92.scope: Deactivated successfully. Nov 4 12:23:02.891076 containerd[1589]: time="2025-11-04T12:23:02.891036880Z" level=info msg="received exit event container_id:\"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" id:\"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" pid:3234 exited_at:{seconds:1762258982 nanos:885651531}" Nov 4 12:23:02.891205 containerd[1589]: time="2025-11-04T12:23:02.891146561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" id:\"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" pid:3234 exited_at:{seconds:1762258982 nanos:885651531}" Nov 4 12:23:02.919695 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92-rootfs.mount: Deactivated successfully. Nov 4 12:23:03.765631 kubelet[2735]: E1104 12:23:03.765556 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:03.770157 containerd[1589]: time="2025-11-04T12:23:03.770113239Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 12:23:03.776665 containerd[1589]: time="2025-11-04T12:23:03.776633353Z" level=info msg="Container 0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:03.791586 containerd[1589]: time="2025-11-04T12:23:03.791535150Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\"" Nov 4 12:23:03.792557 containerd[1589]: time="2025-11-04T12:23:03.792535035Z" level=info msg="StartContainer for \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\"" Nov 4 12:23:03.796171 containerd[1589]: time="2025-11-04T12:23:03.796138573Z" level=info msg="connecting to shim 0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195" address="unix:///run/containerd/s/3131d0121f9c37af0d3e26f6b662b58b410324ffd243e0e52a44dd70f69bed61" protocol=ttrpc version=3 Nov 4 12:23:03.836423 systemd[1]: Started cri-containerd-0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195.scope - libcontainer container 0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195. Nov 4 12:23:03.858918 containerd[1589]: time="2025-11-04T12:23:03.858823176Z" level=info msg="StartContainer for \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" returns successfully" Nov 4 12:23:03.871925 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 4 12:23:03.872142 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:23:03.872201 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 4 12:23:03.874663 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
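The TaskExit events above carry the container exit time as Unix seconds plus nanoseconds (exited_at:{seconds:1762258982 nanos:885651531}); converted back, that is the same 12:23:02 UTC instant the journal stamps on the surrounding records. A one-line conversion:

```go
// exitedat.go — convert the protobuf-style exited_at fields from the TaskExit
// event above back into a wall-clock timestamp.
package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at:{seconds:1762258982 nanos:885651531}
	t := time.Unix(1762258982, 885651531).UTC()
	fmt.Println(t.Format(time.RFC3339Nano)) // 2025-11-04T12:23:02.885651531Z
}
```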
Nov 4 12:23:03.877613 systemd[1]: cri-containerd-0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195.scope: Deactivated successfully. Nov 4 12:23:03.884695 containerd[1589]: time="2025-11-04T12:23:03.884661029Z" level=info msg="received exit event container_id:\"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" id:\"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" pid:3279 exited_at:{seconds:1762258983 nanos:884493868}" Nov 4 12:23:03.884956 containerd[1589]: time="2025-11-04T12:23:03.884777669Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" id:\"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" pid:3279 exited_at:{seconds:1762258983 nanos:884493868}" Nov 4 12:23:03.899362 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 4 12:23:03.904669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195-rootfs.mount: Deactivated successfully. Nov 4 12:23:04.770369 kubelet[2735]: E1104 12:23:04.769922 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:04.774469 containerd[1589]: time="2025-11-04T12:23:04.774422677Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 12:23:04.784667 containerd[1589]: time="2025-11-04T12:23:04.784556125Z" level=info msg="Container 8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:04.802636 containerd[1589]: time="2025-11-04T12:23:04.802592692Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\"" Nov 4 12:23:04.803641 containerd[1589]: time="2025-11-04T12:23:04.803613857Z" level=info msg="StartContainer for \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\"" Nov 4 12:23:04.805638 containerd[1589]: time="2025-11-04T12:23:04.805022464Z" level=info msg="connecting to shim 8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5" address="unix:///run/containerd/s/3131d0121f9c37af0d3e26f6b662b58b410324ffd243e0e52a44dd70f69bed61" protocol=ttrpc version=3 Nov 4 12:23:04.829425 systemd[1]: Started cri-containerd-8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5.scope - libcontainer container 8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5. Nov 4 12:23:04.870997 systemd[1]: cri-containerd-8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5.scope: Deactivated successfully. 
Nov 4 12:23:04.871709 containerd[1589]: time="2025-11-04T12:23:04.871674785Z" level=info msg="StartContainer for \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" returns successfully" Nov 4 12:23:04.872719 containerd[1589]: time="2025-11-04T12:23:04.872408309Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" id:\"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" pid:3326 exited_at:{seconds:1762258984 nanos:872172948}" Nov 4 12:23:04.872719 containerd[1589]: time="2025-11-04T12:23:04.872418749Z" level=info msg="received exit event container_id:\"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" id:\"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" pid:3326 exited_at:{seconds:1762258984 nanos:872172948}" Nov 4 12:23:04.889739 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5-rootfs.mount: Deactivated successfully. Nov 4 12:23:05.780970 kubelet[2735]: E1104 12:23:05.780939 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:05.786101 containerd[1589]: time="2025-11-04T12:23:05.786064158Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 4 12:23:05.811190 containerd[1589]: time="2025-11-04T12:23:05.811029551Z" level=info msg="Container 0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:05.818422 containerd[1589]: time="2025-11-04T12:23:05.818304384Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\"" Nov 4 12:23:05.819335 containerd[1589]: time="2025-11-04T12:23:05.819210268Z" level=info msg="StartContainer for \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\"" Nov 4 12:23:05.820578 containerd[1589]: time="2025-11-04T12:23:05.820553354Z" level=info msg="connecting to shim 0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69" address="unix:///run/containerd/s/3131d0121f9c37af0d3e26f6b662b58b410324ffd243e0e52a44dd70f69bed61" protocol=ttrpc version=3 Nov 4 12:23:05.850443 systemd[1]: Started cri-containerd-0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69.scope - libcontainer container 0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69. Nov 4 12:23:05.870574 systemd[1]: cri-containerd-0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69.scope: Deactivated successfully. 
Nov 4 12:23:05.871365 containerd[1589]: time="2025-11-04T12:23:05.871262583Z" level=info msg="received exit event container_id:\"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" id:\"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" pid:3365 exited_at:{seconds:1762258985 nanos:871013702}" Nov 4 12:23:05.871652 containerd[1589]: time="2025-11-04T12:23:05.871448424Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" id:\"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" pid:3365 exited_at:{seconds:1762258985 nanos:871013702}" Nov 4 12:23:05.873213 containerd[1589]: time="2025-11-04T12:23:05.873183912Z" level=info msg="StartContainer for \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" returns successfully" Nov 4 12:23:05.888510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69-rootfs.mount: Deactivated successfully. Nov 4 12:23:06.787240 kubelet[2735]: E1104 12:23:06.786324 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:06.792225 containerd[1589]: time="2025-11-04T12:23:06.792182323Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 4 12:23:06.801816 containerd[1589]: time="2025-11-04T12:23:06.800385038Z" level=info msg="Container 053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:06.810713 containerd[1589]: time="2025-11-04T12:23:06.810257680Z" level=info msg="CreateContainer within sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\"" Nov 4 12:23:06.810969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4072439755.mount: Deactivated successfully. Nov 4 12:23:06.812336 containerd[1589]: time="2025-11-04T12:23:06.812055367Z" level=info msg="StartContainer for \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\"" Nov 4 12:23:06.813834 containerd[1589]: time="2025-11-04T12:23:06.813780374Z" level=info msg="connecting to shim 053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f" address="unix:///run/containerd/s/3131d0121f9c37af0d3e26f6b662b58b410324ffd243e0e52a44dd70f69bed61" protocol=ttrpc version=3 Nov 4 12:23:06.836438 systemd[1]: Started cri-containerd-053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f.scope - libcontainer container 053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f. 
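Between 12:23:02 and 12:23:06 the cilium-nvmkz pod runs its init steps strictly in order, mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs and clean-cilium-state, each exiting before the next is created, and only then starts the long-running cilium-agent container. As an illustration of what the mount-bpf-fs step has to leave behind (the actual Cilium script is not shown in this log), a minimal check for a BPF filesystem at /sys/fs/bpf, reading /proc/self/mounts on Linux:

```go
// bpffscheck.go — an illustrative check (not Cilium's own code): is a bpf
// filesystem mounted at /sys/fs/bpf? Scans /proc/self/mounts line by line.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func bpffsMounted(mountsPath string) (bool, error) {
	f, err := os.Open(mountsPath)
	if err != nil {
		return false, err
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		// Format: <source> <mountpoint> <fstype> <options> ...
		fields := strings.Fields(sc.Text())
		if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
			return true, nil
		}
	}
	return false, sc.Err()
}

func main() {
	ok, err := bpffsMounted("/proc/self/mounts")
	if err != nil {
		fmt.Println("could not read mounts:", err)
		return
	}
	fmt.Println("bpffs mounted at /sys/fs/bpf:", ok)
}
```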
Nov 4 12:23:06.863880 containerd[1589]: time="2025-11-04T12:23:06.863847547Z" level=info msg="StartContainer for \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" returns successfully" Nov 4 12:23:06.938843 containerd[1589]: time="2025-11-04T12:23:06.938803944Z" level=info msg="TaskExit event in podsandbox handler container_id:\"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" id:\"dde2e442dcb3f4c36bc4d20c2ada81367fd0d7123525f8bfd23748b31f76c754\" pid:3436 exited_at:{seconds:1762258986 nanos:938361222}" Nov 4 12:23:06.964840 kubelet[2735]: I1104 12:23:06.964811 2735 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Nov 4 12:23:07.004934 systemd[1]: Created slice kubepods-burstable-pod7aae3451_4ac8_4d01_bbef_a97ea9b1db0c.slice - libcontainer container kubepods-burstable-pod7aae3451_4ac8_4d01_bbef_a97ea9b1db0c.slice. Nov 4 12:23:07.017201 systemd[1]: Created slice kubepods-burstable-pod305adcb1_222f_41fb_8769_7cabc8fc2a4f.slice - libcontainer container kubepods-burstable-pod305adcb1_222f_41fb_8769_7cabc8fc2a4f.slice. Nov 4 12:23:07.035354 kubelet[2735]: I1104 12:23:07.035309 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpk6q\" (UniqueName: \"kubernetes.io/projected/7aae3451-4ac8-4d01-bbef-a97ea9b1db0c-kube-api-access-xpk6q\") pod \"coredns-66bc5c9577-8c4fg\" (UID: \"7aae3451-4ac8-4d01-bbef-a97ea9b1db0c\") " pod="kube-system/coredns-66bc5c9577-8c4fg" Nov 4 12:23:07.035470 kubelet[2735]: I1104 12:23:07.035412 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7aae3451-4ac8-4d01-bbef-a97ea9b1db0c-config-volume\") pod \"coredns-66bc5c9577-8c4fg\" (UID: \"7aae3451-4ac8-4d01-bbef-a97ea9b1db0c\") " pod="kube-system/coredns-66bc5c9577-8c4fg" Nov 4 12:23:07.035470 kubelet[2735]: I1104 12:23:07.035434 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/305adcb1-222f-41fb-8769-7cabc8fc2a4f-config-volume\") pod \"coredns-66bc5c9577-6zx9q\" (UID: \"305adcb1-222f-41fb-8769-7cabc8fc2a4f\") " pod="kube-system/coredns-66bc5c9577-6zx9q" Nov 4 12:23:07.035470 kubelet[2735]: I1104 12:23:07.035452 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttvws\" (UniqueName: \"kubernetes.io/projected/305adcb1-222f-41fb-8769-7cabc8fc2a4f-kube-api-access-ttvws\") pod \"coredns-66bc5c9577-6zx9q\" (UID: \"305adcb1-222f-41fb-8769-7cabc8fc2a4f\") " pod="kube-system/coredns-66bc5c9577-6zx9q" Nov 4 12:23:07.311844 kubelet[2735]: E1104 12:23:07.311805 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:07.320140 containerd[1589]: time="2025-11-04T12:23:07.320061236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8c4fg,Uid:7aae3451-4ac8-4d01-bbef-a97ea9b1db0c,Namespace:kube-system,Attempt:0,}" Nov 4 12:23:07.323317 kubelet[2735]: E1104 12:23:07.323208 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:07.323966 containerd[1589]: time="2025-11-04T12:23:07.323935371Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-66bc5c9577-6zx9q,Uid:305adcb1-222f-41fb-8769-7cabc8fc2a4f,Namespace:kube-system,Attempt:0,}" Nov 4 12:23:07.795301 kubelet[2735]: E1104 12:23:07.795228 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:07.813795 kubelet[2735]: I1104 12:23:07.813410 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-nvmkz" podStartSLOduration=6.845653208 podStartE2EDuration="15.813394556s" podCreationTimestamp="2025-11-04 12:22:52 +0000 UTC" firstStartedPulling="2025-11-04 12:22:53.826837043 +0000 UTC m=+7.237128354" lastFinishedPulling="2025-11-04 12:23:02.794578391 +0000 UTC m=+16.204869702" observedRunningTime="2025-11-04 12:23:07.811500908 +0000 UTC m=+21.221792219" watchObservedRunningTime="2025-11-04 12:23:07.813394556 +0000 UTC m=+21.223685867" Nov 4 12:23:08.797230 kubelet[2735]: E1104 12:23:08.797202 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:08.893181 systemd-networkd[1497]: cilium_host: Link UP Nov 4 12:23:08.893903 systemd-networkd[1497]: cilium_net: Link UP Nov 4 12:23:08.894044 systemd-networkd[1497]: cilium_net: Gained carrier Nov 4 12:23:08.894167 systemd-networkd[1497]: cilium_host: Gained carrier Nov 4 12:23:08.969054 systemd-networkd[1497]: cilium_vxlan: Link UP Nov 4 12:23:08.969062 systemd-networkd[1497]: cilium_vxlan: Gained carrier Nov 4 12:23:09.127510 systemd-networkd[1497]: cilium_host: Gained IPv6LL Nov 4 12:23:09.217389 kernel: NET: Registered PF_ALG protocol family Nov 4 12:23:09.295508 systemd-networkd[1497]: cilium_net: Gained IPv6LL Nov 4 12:23:09.777092 systemd-networkd[1497]: lxc_health: Link UP Nov 4 12:23:09.779948 systemd-networkd[1497]: lxc_health: Gained carrier Nov 4 12:23:09.800304 kubelet[2735]: E1104 12:23:09.799254 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:10.362779 systemd-networkd[1497]: lxc42102e0bbeb0: Link UP Nov 4 12:23:10.371459 kernel: eth0: renamed from tmp09813 Nov 4 12:23:10.374450 systemd-networkd[1497]: lxc42102e0bbeb0: Gained carrier Nov 4 12:23:10.384333 kernel: eth0: renamed from tmpbdb5b Nov 4 12:23:10.386563 systemd-networkd[1497]: lxcc6074d5b91e4: Link UP Nov 4 12:23:10.387857 systemd-networkd[1497]: lxcc6074d5b91e4: Gained carrier Nov 4 12:23:10.903444 systemd-networkd[1497]: lxc_health: Gained IPv6LL Nov 4 12:23:11.031487 systemd-networkd[1497]: cilium_vxlan: Gained IPv6LL Nov 4 12:23:11.671428 systemd-networkd[1497]: lxc42102e0bbeb0: Gained IPv6LL Nov 4 12:23:11.750468 kubelet[2735]: E1104 12:23:11.749542 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:12.056360 systemd-networkd[1497]: lxcc6074d5b91e4: Gained IPv6LL Nov 4 12:23:12.523588 systemd[1]: Started sshd@7-10.0.0.89:22-10.0.0.1:35440.service - OpenSSH per-connection server daemon (10.0.0.1:35440). 
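The startup-latency entry above for cilium-nvmkz is internally consistent: the end-to-end duration is the gap from podCreationTimestamp to the observed running time, and the SLO duration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling). Recomputing both from the logged timestamps, under that assumption, reproduces 15.813394556s and 6.845653208s exactly:

```go
// podstart.go — recompute podStartE2EDuration and podStartSLOduration for
// cilium-nvmkz from the timestamps in the tracker entry above, assuming
// SLO duration = end-to-end duration minus the image-pull window.
package main

import (
	"fmt"
	"time"
)

func mustParse(v string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", v)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-11-04 12:22:52 +0000 UTC")
	firstPull := mustParse("2025-11-04 12:22:53.826837043 +0000 UTC")
	lastPull := mustParse("2025-11-04 12:23:02.794578391 +0000 UTC")
	observed := mustParse("2025-11-04 12:23:07.813394556 +0000 UTC")

	e2e := observed.Sub(created)
	slo := e2e - lastPull.Sub(firstPull)
	fmt.Println("podStartE2EDuration:", e2e) // 15.813394556s
	fmt.Println("podStartSLOduration:", slo) // 6.845653208s
}
```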
Nov 4 12:23:12.579754 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 35440 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:12.580793 sshd-session[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:12.586860 systemd-logind[1570]: New session 8 of user core. Nov 4 12:23:12.599462 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 4 12:23:12.723436 sshd[3923]: Connection closed by 10.0.0.1 port 35440 Nov 4 12:23:12.723841 sshd-session[3920]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:12.727871 systemd[1]: sshd@7-10.0.0.89:22-10.0.0.1:35440.service: Deactivated successfully. Nov 4 12:23:12.730995 systemd[1]: session-8.scope: Deactivated successfully. Nov 4 12:23:12.733081 systemd-logind[1570]: Session 8 logged out. Waiting for processes to exit. Nov 4 12:23:12.734365 systemd-logind[1570]: Removed session 8. Nov 4 12:23:13.898989 containerd[1589]: time="2025-11-04T12:23:13.898833521Z" level=info msg="connecting to shim bdb5bcc7c11c7c09892175ebdfe26bc7cf2cac86456f12d900616fae3c78ae2b" address="unix:///run/containerd/s/f33e1997b3fd5a2dea4e24f97223c9a13a49e7e7dec51ad6f68f21d19b470029" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:23:13.900952 containerd[1589]: time="2025-11-04T12:23:13.900156484Z" level=info msg="connecting to shim 09813a9c56a64e4ad2ab9602bc2160bca72c609ea299d24252f4ddf8d8f620dc" address="unix:///run/containerd/s/550067bb51a4719480a172e4f2d78a693c0d502efa9deba9f0a31e8e51cb0036" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:23:13.926673 systemd[1]: Started cri-containerd-bdb5bcc7c11c7c09892175ebdfe26bc7cf2cac86456f12d900616fae3c78ae2b.scope - libcontainer container bdb5bcc7c11c7c09892175ebdfe26bc7cf2cac86456f12d900616fae3c78ae2b. Nov 4 12:23:13.929182 systemd[1]: Started cri-containerd-09813a9c56a64e4ad2ab9602bc2160bca72c609ea299d24252f4ddf8d8f620dc.scope - libcontainer container 09813a9c56a64e4ad2ab9602bc2160bca72c609ea299d24252f4ddf8d8f620dc. 
Nov 4 12:23:13.943406 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:23:13.967947 containerd[1589]: time="2025-11-04T12:23:13.967889027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6zx9q,Uid:305adcb1-222f-41fb-8769-7cabc8fc2a4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"09813a9c56a64e4ad2ab9602bc2160bca72c609ea299d24252f4ddf8d8f620dc\"" Nov 4 12:23:13.972134 kubelet[2735]: E1104 12:23:13.970238 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:13.976235 containerd[1589]: time="2025-11-04T12:23:13.976122169Z" level=info msg="CreateContainer within sandbox \"09813a9c56a64e4ad2ab9602bc2160bca72c609ea299d24252f4ddf8d8f620dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 12:23:13.986854 containerd[1589]: time="2025-11-04T12:23:13.986813118Z" level=info msg="Container 9d9e478fb72d094c57be583dbf1ea8b7c4997e4a1bb34aa0c240a16bfede42b9: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:13.999127 containerd[1589]: time="2025-11-04T12:23:13.999071991Z" level=info msg="CreateContainer within sandbox \"09813a9c56a64e4ad2ab9602bc2160bca72c609ea299d24252f4ddf8d8f620dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9d9e478fb72d094c57be583dbf1ea8b7c4997e4a1bb34aa0c240a16bfede42b9\"" Nov 4 12:23:14.000526 containerd[1589]: time="2025-11-04T12:23:14.000502875Z" level=info msg="StartContainer for \"9d9e478fb72d094c57be583dbf1ea8b7c4997e4a1bb34aa0c240a16bfede42b9\"" Nov 4 12:23:14.001432 containerd[1589]: time="2025-11-04T12:23:14.001392558Z" level=info msg="connecting to shim 9d9e478fb72d094c57be583dbf1ea8b7c4997e4a1bb34aa0c240a16bfede42b9" address="unix:///run/containerd/s/550067bb51a4719480a172e4f2d78a693c0d502efa9deba9f0a31e8e51cb0036" protocol=ttrpc version=3 Nov 4 12:23:14.013544 systemd-resolved[1279]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 4 12:23:14.024444 systemd[1]: Started cri-containerd-9d9e478fb72d094c57be583dbf1ea8b7c4997e4a1bb34aa0c240a16bfede42b9.scope - libcontainer container 9d9e478fb72d094c57be583dbf1ea8b7c4997e4a1bb34aa0c240a16bfede42b9. 
Nov 4 12:23:14.036917 containerd[1589]: time="2025-11-04T12:23:14.036155005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-8c4fg,Uid:7aae3451-4ac8-4d01-bbef-a97ea9b1db0c,Namespace:kube-system,Attempt:0,} returns sandbox id \"bdb5bcc7c11c7c09892175ebdfe26bc7cf2cac86456f12d900616fae3c78ae2b\"" Nov 4 12:23:14.037007 kubelet[2735]: E1104 12:23:14.036990 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:14.040508 containerd[1589]: time="2025-11-04T12:23:14.040471056Z" level=info msg="CreateContainer within sandbox \"bdb5bcc7c11c7c09892175ebdfe26bc7cf2cac86456f12d900616fae3c78ae2b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 4 12:23:14.049981 containerd[1589]: time="2025-11-04T12:23:14.049945960Z" level=info msg="Container 03860069fecc1adba4e707bc78317eea15ec37563e883f5a195253aee9c47b8f: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:14.057132 containerd[1589]: time="2025-11-04T12:23:14.057074418Z" level=info msg="CreateContainer within sandbox \"bdb5bcc7c11c7c09892175ebdfe26bc7cf2cac86456f12d900616fae3c78ae2b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"03860069fecc1adba4e707bc78317eea15ec37563e883f5a195253aee9c47b8f\"" Nov 4 12:23:14.058621 containerd[1589]: time="2025-11-04T12:23:14.058476702Z" level=info msg="StartContainer for \"03860069fecc1adba4e707bc78317eea15ec37563e883f5a195253aee9c47b8f\"" Nov 4 12:23:14.059011 containerd[1589]: time="2025-11-04T12:23:14.058938623Z" level=info msg="StartContainer for \"9d9e478fb72d094c57be583dbf1ea8b7c4997e4a1bb34aa0c240a16bfede42b9\" returns successfully" Nov 4 12:23:14.059659 containerd[1589]: time="2025-11-04T12:23:14.059561825Z" level=info msg="connecting to shim 03860069fecc1adba4e707bc78317eea15ec37563e883f5a195253aee9c47b8f" address="unix:///run/containerd/s/f33e1997b3fd5a2dea4e24f97223c9a13a49e7e7dec51ad6f68f21d19b470029" protocol=ttrpc version=3 Nov 4 12:23:14.084448 systemd[1]: Started cri-containerd-03860069fecc1adba4e707bc78317eea15ec37563e883f5a195253aee9c47b8f.scope - libcontainer container 03860069fecc1adba4e707bc78317eea15ec37563e883f5a195253aee9c47b8f. 
Nov 4 12:23:14.116895 containerd[1589]: time="2025-11-04T12:23:14.116858450Z" level=info msg="StartContainer for \"03860069fecc1adba4e707bc78317eea15ec37563e883f5a195253aee9c47b8f\" returns successfully" Nov 4 12:23:14.812297 kubelet[2735]: E1104 12:23:14.812241 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:14.815267 kubelet[2735]: E1104 12:23:14.815185 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:14.824504 kubelet[2735]: I1104 12:23:14.824445 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6zx9q" podStartSLOduration=21.824326039 podStartE2EDuration="21.824326039s" podCreationTimestamp="2025-11-04 12:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:23:14.823500677 +0000 UTC m=+28.233791948" watchObservedRunningTime="2025-11-04 12:23:14.824326039 +0000 UTC m=+28.234617350" Nov 4 12:23:14.832671 kubelet[2735]: I1104 12:23:14.832616 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-8c4fg" podStartSLOduration=21.83260162 podStartE2EDuration="21.83260162s" podCreationTimestamp="2025-11-04 12:22:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:23:14.832401659 +0000 UTC m=+28.242692970" watchObservedRunningTime="2025-11-04 12:23:14.83260162 +0000 UTC m=+28.242892931" Nov 4 12:23:15.267341 kubelet[2735]: I1104 12:23:15.267306 2735 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 4 12:23:15.267776 kubelet[2735]: E1104 12:23:15.267753 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:15.817052 kubelet[2735]: E1104 12:23:15.816651 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:15.817052 kubelet[2735]: E1104 12:23:15.816915 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:15.817052 kubelet[2735]: E1104 12:23:15.816979 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:16.818501 kubelet[2735]: E1104 12:23:16.818413 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:17.735566 systemd[1]: Started sshd@8-10.0.0.89:22-10.0.0.1:35454.service - OpenSSH per-connection server daemon (10.0.0.1:35454). 
Nov 4 12:23:17.788795 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 35454 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:17.790485 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:17.794683 systemd-logind[1570]: New session 9 of user core. Nov 4 12:23:17.805496 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 4 12:23:17.924541 sshd[4121]: Connection closed by 10.0.0.1 port 35454 Nov 4 12:23:17.925141 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:17.928772 systemd[1]: sshd@8-10.0.0.89:22-10.0.0.1:35454.service: Deactivated successfully. Nov 4 12:23:17.930401 systemd[1]: session-9.scope: Deactivated successfully. Nov 4 12:23:17.931000 systemd-logind[1570]: Session 9 logged out. Waiting for processes to exit. Nov 4 12:23:17.933004 systemd-logind[1570]: Removed session 9. Nov 4 12:23:22.942099 systemd[1]: Started sshd@9-10.0.0.89:22-10.0.0.1:47734.service - OpenSSH per-connection server daemon (10.0.0.1:47734). Nov 4 12:23:22.997613 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 47734 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:22.999248 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:23.003103 systemd-logind[1570]: New session 10 of user core. Nov 4 12:23:23.011427 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 4 12:23:23.124303 sshd[4138]: Connection closed by 10.0.0.1 port 47734 Nov 4 12:23:23.124981 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:23.132883 systemd[1]: sshd@9-10.0.0.89:22-10.0.0.1:47734.service: Deactivated successfully. Nov 4 12:23:23.134472 systemd[1]: session-10.scope: Deactivated successfully. Nov 4 12:23:23.136184 systemd-logind[1570]: Session 10 logged out. Waiting for processes to exit. Nov 4 12:23:23.140418 systemd-logind[1570]: Removed session 10. Nov 4 12:23:28.138708 systemd[1]: Started sshd@10-10.0.0.89:22-10.0.0.1:47742.service - OpenSSH per-connection server daemon (10.0.0.1:47742). Nov 4 12:23:28.206920 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 47742 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:28.208450 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:28.212437 systemd-logind[1570]: New session 11 of user core. Nov 4 12:23:28.222485 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 4 12:23:28.337774 sshd[4159]: Connection closed by 10.0.0.1 port 47742 Nov 4 12:23:28.338303 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:28.351901 systemd[1]: sshd@10-10.0.0.89:22-10.0.0.1:47742.service: Deactivated successfully. Nov 4 12:23:28.353778 systemd[1]: session-11.scope: Deactivated successfully. Nov 4 12:23:28.356519 systemd-logind[1570]: Session 11 logged out. Waiting for processes to exit. Nov 4 12:23:28.358057 systemd-logind[1570]: Removed session 11. Nov 4 12:23:28.359999 systemd[1]: Started sshd@11-10.0.0.89:22-10.0.0.1:47752.service - OpenSSH per-connection server daemon (10.0.0.1:47752). 
Nov 4 12:23:28.426757 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 47752 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:28.427864 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:28.432240 systemd-logind[1570]: New session 12 of user core. Nov 4 12:23:28.447438 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 4 12:23:28.594849 sshd[4178]: Connection closed by 10.0.0.1 port 47752 Nov 4 12:23:28.594405 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:28.611325 systemd[1]: sshd@11-10.0.0.89:22-10.0.0.1:47752.service: Deactivated successfully. Nov 4 12:23:28.613015 systemd[1]: session-12.scope: Deactivated successfully. Nov 4 12:23:28.614859 systemd-logind[1570]: Session 12 logged out. Waiting for processes to exit. Nov 4 12:23:28.617223 systemd[1]: Started sshd@12-10.0.0.89:22-10.0.0.1:47766.service - OpenSSH per-connection server daemon (10.0.0.1:47766). Nov 4 12:23:28.618000 systemd-logind[1570]: Removed session 12. Nov 4 12:23:28.677243 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 47766 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:28.678319 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:28.682358 systemd-logind[1570]: New session 13 of user core. Nov 4 12:23:28.694434 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 4 12:23:28.807948 sshd[4192]: Connection closed by 10.0.0.1 port 47766 Nov 4 12:23:28.807770 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:28.811807 systemd[1]: sshd@12-10.0.0.89:22-10.0.0.1:47766.service: Deactivated successfully. Nov 4 12:23:28.813523 systemd[1]: session-13.scope: Deactivated successfully. Nov 4 12:23:28.815950 systemd-logind[1570]: Session 13 logged out. Waiting for processes to exit. Nov 4 12:23:28.817470 systemd-logind[1570]: Removed session 13. Nov 4 12:23:33.827379 systemd[1]: Started sshd@13-10.0.0.89:22-10.0.0.1:46868.service - OpenSSH per-connection server daemon (10.0.0.1:46868). Nov 4 12:23:33.888188 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 46868 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:33.889612 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:33.894347 systemd-logind[1570]: New session 14 of user core. Nov 4 12:23:33.901431 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 4 12:23:34.023973 sshd[4208]: Connection closed by 10.0.0.1 port 46868 Nov 4 12:23:34.023898 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:34.034252 systemd[1]: sshd@13-10.0.0.89:22-10.0.0.1:46868.service: Deactivated successfully. Nov 4 12:23:34.035843 systemd[1]: session-14.scope: Deactivated successfully. Nov 4 12:23:34.037114 systemd-logind[1570]: Session 14 logged out. Waiting for processes to exit. Nov 4 12:23:34.038986 systemd[1]: Started sshd@14-10.0.0.89:22-10.0.0.1:46874.service - OpenSSH per-connection server daemon (10.0.0.1:46874). Nov 4 12:23:34.039915 systemd-logind[1570]: Removed session 14. 
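Every SSH connection in this stretch follows the same lifecycle: sshd accepts a publickey for user core, pam_unix opens the session, systemd-logind registers session N, and on disconnect the session scope and the per-connection service are deactivated. An illustrative extraction of user, peer address, port and key fingerprint from the recurring "Accepted publickey" records (parsing sketch only, not an sshd interface):

```go
// sshaccept.go — pull user, source address/port and key fingerprint out of the
// recurring "Accepted publickey" records above.
package main

import (
	"fmt"
	"regexp"
)

var acceptRe = regexp.MustCompile(
	`Accepted publickey for (\S+) from (\S+) port (\d+) ssh2: RSA (\S+)`)

func main() {
	line := "sshd[4189]: Accepted publickey for core from 10.0.0.1 port 47766 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU"
	if m := acceptRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("user=%s addr=%s port=%s fingerprint=%s\n", m[1], m[2], m[3], m[4])
	}
}
```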
Nov 4 12:23:34.096783 sshd[4221]: Accepted publickey for core from 10.0.0.1 port 46874 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:34.098216 sshd-session[4221]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:34.102336 systemd-logind[1570]: New session 15 of user core. Nov 4 12:23:34.116451 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 4 12:23:34.340963 sshd[4224]: Connection closed by 10.0.0.1 port 46874 Nov 4 12:23:34.341567 sshd-session[4221]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:34.353350 systemd[1]: sshd@14-10.0.0.89:22-10.0.0.1:46874.service: Deactivated successfully. Nov 4 12:23:34.354818 systemd[1]: session-15.scope: Deactivated successfully. Nov 4 12:23:34.355466 systemd-logind[1570]: Session 15 logged out. Waiting for processes to exit. Nov 4 12:23:34.357196 systemd[1]: Started sshd@15-10.0.0.89:22-10.0.0.1:46888.service - OpenSSH per-connection server daemon (10.0.0.1:46888). Nov 4 12:23:34.361101 systemd-logind[1570]: Removed session 15. Nov 4 12:23:34.413962 sshd[4235]: Accepted publickey for core from 10.0.0.1 port 46888 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:34.415156 sshd-session[4235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:34.419018 systemd-logind[1570]: New session 16 of user core. Nov 4 12:23:34.430441 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 4 12:23:35.040207 sshd[4238]: Connection closed by 10.0.0.1 port 46888 Nov 4 12:23:35.041043 sshd-session[4235]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:35.048233 systemd[1]: sshd@15-10.0.0.89:22-10.0.0.1:46888.service: Deactivated successfully. Nov 4 12:23:35.051841 systemd[1]: session-16.scope: Deactivated successfully. Nov 4 12:23:35.053933 systemd-logind[1570]: Session 16 logged out. Waiting for processes to exit. Nov 4 12:23:35.057719 systemd[1]: Started sshd@16-10.0.0.89:22-10.0.0.1:46902.service - OpenSSH per-connection server daemon (10.0.0.1:46902). Nov 4 12:23:35.058227 systemd-logind[1570]: Removed session 16. Nov 4 12:23:35.123694 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 46902 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:35.124859 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:35.128392 systemd-logind[1570]: New session 17 of user core. Nov 4 12:23:35.134453 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 4 12:23:35.360392 sshd[4260]: Connection closed by 10.0.0.1 port 46902 Nov 4 12:23:35.360203 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:35.373951 systemd[1]: sshd@16-10.0.0.89:22-10.0.0.1:46902.service: Deactivated successfully. Nov 4 12:23:35.375921 systemd[1]: session-17.scope: Deactivated successfully. Nov 4 12:23:35.376966 systemd-logind[1570]: Session 17 logged out. Waiting for processes to exit. Nov 4 12:23:35.379762 systemd[1]: Started sshd@17-10.0.0.89:22-10.0.0.1:46912.service - OpenSSH per-connection server daemon (10.0.0.1:46912). Nov 4 12:23:35.380427 systemd-logind[1570]: Removed session 17. 
Nov 4 12:23:35.441640 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 46912 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:35.442768 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:35.446585 systemd-logind[1570]: New session 18 of user core. Nov 4 12:23:35.457443 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 4 12:23:35.566180 sshd[4275]: Connection closed by 10.0.0.1 port 46912 Nov 4 12:23:35.566520 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:35.570175 systemd[1]: sshd@17-10.0.0.89:22-10.0.0.1:46912.service: Deactivated successfully. Nov 4 12:23:35.574240 systemd[1]: session-18.scope: Deactivated successfully. Nov 4 12:23:35.575876 systemd-logind[1570]: Session 18 logged out. Waiting for processes to exit. Nov 4 12:23:35.577030 systemd-logind[1570]: Removed session 18. Nov 4 12:23:40.581844 systemd[1]: Started sshd@18-10.0.0.89:22-10.0.0.1:57770.service - OpenSSH per-connection server daemon (10.0.0.1:57770). Nov 4 12:23:40.630451 sshd[4293]: Accepted publickey for core from 10.0.0.1 port 57770 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:40.631639 sshd-session[4293]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:40.636017 systemd-logind[1570]: New session 19 of user core. Nov 4 12:23:40.644466 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 4 12:23:40.758198 sshd[4296]: Connection closed by 10.0.0.1 port 57770 Nov 4 12:23:40.758539 sshd-session[4293]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:40.762194 systemd[1]: sshd@18-10.0.0.89:22-10.0.0.1:57770.service: Deactivated successfully. Nov 4 12:23:40.763985 systemd[1]: session-19.scope: Deactivated successfully. Nov 4 12:23:40.765791 systemd-logind[1570]: Session 19 logged out. Waiting for processes to exit. Nov 4 12:23:40.766973 systemd-logind[1570]: Removed session 19. Nov 4 12:23:45.769476 systemd[1]: Started sshd@19-10.0.0.89:22-10.0.0.1:57786.service - OpenSSH per-connection server daemon (10.0.0.1:57786). Nov 4 12:23:45.829598 sshd[4309]: Accepted publickey for core from 10.0.0.1 port 57786 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:45.830737 sshd-session[4309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:45.834350 systemd-logind[1570]: New session 20 of user core. Nov 4 12:23:45.841409 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 4 12:23:45.956333 sshd[4312]: Connection closed by 10.0.0.1 port 57786 Nov 4 12:23:45.956841 sshd-session[4309]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:45.960434 systemd[1]: sshd@19-10.0.0.89:22-10.0.0.1:57786.service: Deactivated successfully. Nov 4 12:23:45.962059 systemd[1]: session-20.scope: Deactivated successfully. Nov 4 12:23:45.963033 systemd-logind[1570]: Session 20 logged out. Waiting for processes to exit. Nov 4 12:23:45.964252 systemd-logind[1570]: Removed session 20. Nov 4 12:23:50.972055 systemd[1]: Started sshd@20-10.0.0.89:22-10.0.0.1:54846.service - OpenSSH per-connection server daemon (10.0.0.1:54846). 
Nov 4 12:23:51.020061 sshd[4328]: Accepted publickey for core from 10.0.0.1 port 54846 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:51.021337 sshd-session[4328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:51.026003 systemd-logind[1570]: New session 21 of user core. Nov 4 12:23:51.033442 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 4 12:23:51.145612 sshd[4331]: Connection closed by 10.0.0.1 port 54846 Nov 4 12:23:51.145953 sshd-session[4328]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:51.157480 systemd[1]: sshd@20-10.0.0.89:22-10.0.0.1:54846.service: Deactivated successfully. Nov 4 12:23:51.159410 systemd[1]: session-21.scope: Deactivated successfully. Nov 4 12:23:51.160322 systemd-logind[1570]: Session 21 logged out. Waiting for processes to exit. Nov 4 12:23:51.164310 systemd[1]: Started sshd@21-10.0.0.89:22-10.0.0.1:54858.service - OpenSSH per-connection server daemon (10.0.0.1:54858). Nov 4 12:23:51.165580 systemd-logind[1570]: Removed session 21. Nov 4 12:23:51.228511 sshd[4345]: Accepted publickey for core from 10.0.0.1 port 54858 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:51.229688 sshd-session[4345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:51.234354 systemd-logind[1570]: New session 22 of user core. Nov 4 12:23:51.248489 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 4 12:23:52.829241 containerd[1589]: time="2025-11-04T12:23:52.829149687Z" level=info msg="StopContainer for \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" with timeout 30 (s)" Nov 4 12:23:52.830752 containerd[1589]: time="2025-11-04T12:23:52.830710291Z" level=info msg="Stop container \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" with signal terminated" Nov 4 12:23:52.846430 systemd[1]: cri-containerd-0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64.scope: Deactivated successfully. 
Nov 4 12:23:52.855985 containerd[1589]: time="2025-11-04T12:23:52.855544664Z" level=info msg="received exit event container_id:\"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" id:\"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" pid:3157 exited_at:{seconds:1762259032 nanos:855035183}" Nov 4 12:23:52.856449 containerd[1589]: time="2025-11-04T12:23:52.855568024Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" id:\"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" pid:3157 exited_at:{seconds:1762259032 nanos:855035183}" Nov 4 12:23:52.866268 containerd[1589]: time="2025-11-04T12:23:52.866237647Z" level=info msg="TaskExit event in podsandbox handler container_id:\"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" id:\"c0ec0e7f5fdeb0d4ad1fef1329a6e0a4bc7e8d67e7482560c7de5dffd7aae51d\" pid:4376 exited_at:{seconds:1762259032 nanos:865689886}" Nov 4 12:23:52.868068 containerd[1589]: time="2025-11-04T12:23:52.868043811Z" level=info msg="StopContainer for \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" with timeout 2 (s)" Nov 4 12:23:52.868386 containerd[1589]: time="2025-11-04T12:23:52.868365132Z" level=info msg="Stop container \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" with signal terminated" Nov 4 12:23:52.873901 systemd-networkd[1497]: lxc_health: Link DOWN Nov 4 12:23:52.873908 systemd-networkd[1497]: lxc_health: Lost carrier Nov 4 12:23:52.890893 systemd[1]: cri-containerd-053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f.scope: Deactivated successfully. Nov 4 12:23:52.891202 systemd[1]: cri-containerd-053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f.scope: Consumed 6.063s CPU time, 121.3M memory peak, 144K read from disk, 12.9M written to disk. Nov 4 12:23:52.892528 containerd[1589]: time="2025-11-04T12:23:52.892496943Z" level=info msg="TaskExit event in podsandbox handler container_id:\"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" id:\"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" pid:3405 exited_at:{seconds:1762259032 nanos:892257583}" Nov 4 12:23:52.892582 containerd[1589]: time="2025-11-04T12:23:52.892571864Z" level=info msg="received exit event container_id:\"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" id:\"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" pid:3405 exited_at:{seconds:1762259032 nanos:892257583}" Nov 4 12:23:52.896949 containerd[1589]: time="2025-11-04T12:23:52.896909393Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 4 12:23:52.905710 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64-rootfs.mount: Deactivated successfully. Nov 4 12:23:52.912728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f-rootfs.mount: Deactivated successfully. 
Nov 4 12:23:52.926060 containerd[1589]: time="2025-11-04T12:23:52.925914855Z" level=info msg="StopContainer for \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" returns successfully" Nov 4 12:23:52.929234 containerd[1589]: time="2025-11-04T12:23:52.929185462Z" level=info msg="StopPodSandbox for \"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\"" Nov 4 12:23:52.929381 containerd[1589]: time="2025-11-04T12:23:52.929354903Z" level=info msg="Container to stop \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:23:52.932734 containerd[1589]: time="2025-11-04T12:23:52.932687070Z" level=info msg="StopContainer for \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" returns successfully" Nov 4 12:23:52.933403 containerd[1589]: time="2025-11-04T12:23:52.933209231Z" level=info msg="StopPodSandbox for \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\"" Nov 4 12:23:52.933403 containerd[1589]: time="2025-11-04T12:23:52.933329471Z" level=info msg="Container to stop \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:23:52.933403 containerd[1589]: time="2025-11-04T12:23:52.933345111Z" level=info msg="Container to stop \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:23:52.933403 containerd[1589]: time="2025-11-04T12:23:52.933361271Z" level=info msg="Container to stop \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:23:52.933627 containerd[1589]: time="2025-11-04T12:23:52.933370351Z" level=info msg="Container to stop \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:23:52.933627 containerd[1589]: time="2025-11-04T12:23:52.933482232Z" level=info msg="Container to stop \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 4 12:23:52.939645 systemd[1]: cri-containerd-3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca.scope: Deactivated successfully. Nov 4 12:23:52.942159 containerd[1589]: time="2025-11-04T12:23:52.942119890Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\" id:\"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\" pid:2857 exit_status:137 exited_at:{seconds:1762259032 nanos:941684329}" Nov 4 12:23:52.948587 systemd[1]: cri-containerd-05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa.scope: Deactivated successfully. Nov 4 12:23:52.965621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa-rootfs.mount: Deactivated successfully. Nov 4 12:23:52.972124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca-rootfs.mount: Deactivated successfully. 
Nov 4 12:23:52.981015 containerd[1589]: time="2025-11-04T12:23:52.980959414Z" level=info msg="shim disconnected" id=05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa namespace=k8s.io Nov 4 12:23:52.981139 containerd[1589]: time="2025-11-04T12:23:52.981011574Z" level=warning msg="cleaning up after shim disconnected" id=05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa namespace=k8s.io Nov 4 12:23:52.981139 containerd[1589]: time="2025-11-04T12:23:52.981051934Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 12:23:52.995627 containerd[1589]: time="2025-11-04T12:23:52.980979294Z" level=error msg="ttrpc: received message on inactive stream" stream=57 Nov 4 12:23:52.995627 containerd[1589]: time="2025-11-04T12:23:52.995481645Z" level=info msg="shim disconnected" id=3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca namespace=k8s.io Nov 4 12:23:52.995627 containerd[1589]: time="2025-11-04T12:23:52.995502405Z" level=warning msg="cleaning up after shim disconnected" id=3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca namespace=k8s.io Nov 4 12:23:52.995627 containerd[1589]: time="2025-11-04T12:23:52.995509885Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 4 12:23:52.999324 containerd[1589]: time="2025-11-04T12:23:52.998003050Z" level=info msg="TearDown network for sandbox \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" successfully" Nov 4 12:23:52.999324 containerd[1589]: time="2025-11-04T12:23:52.998029970Z" level=info msg="StopPodSandbox for \"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" returns successfully" Nov 4 12:23:52.999827 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa-shm.mount: Deactivated successfully. 
Nov 4 12:23:53.003305 containerd[1589]: time="2025-11-04T12:23:53.003254772Z" level=info msg="received exit event sandbox_id:\"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" exit_status:137 exited_at:{seconds:1762259032 nanos:950166308}" Nov 4 12:23:53.010157 containerd[1589]: time="2025-11-04T12:23:53.010121251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" id:\"05d8b83a0a8a2a92abe880eae49db50bf0457f0b7b7d4b79268dbd60aacf3caa\" pid:2945 exit_status:137 exited_at:{seconds:1762259032 nanos:950166308}" Nov 4 12:23:53.010242 containerd[1589]: time="2025-11-04T12:23:53.010170612Z" level=info msg="received exit event sandbox_id:\"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\" exit_status:137 exited_at:{seconds:1762259032 nanos:941684329}" Nov 4 12:23:53.010732 containerd[1589]: time="2025-11-04T12:23:53.010628500Z" level=info msg="TearDown network for sandbox \"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\" successfully" Nov 4 12:23:53.010732 containerd[1589]: time="2025-11-04T12:23:53.010656101Z" level=info msg="StopPodSandbox for \"3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca\" returns successfully" Nov 4 12:23:53.121366 kubelet[2735]: I1104 12:23:53.121241 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-cgroup\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.121366 kubelet[2735]: I1104 12:23:53.121300 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-etc-cni-netd\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.121366 kubelet[2735]: I1104 12:23:53.121319 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-bpf-maps\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.121366 kubelet[2735]: I1104 12:23:53.121338 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30dad178-8cfb-42e8-9abf-e5daba536063-clustermesh-secrets\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123291 kubelet[2735]: I1104 12:23:53.121354 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-xtables-lock\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123291 kubelet[2735]: I1104 12:23:53.122780 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49984\" (UniqueName: \"kubernetes.io/projected/c912f37b-377f-476e-8fc9-86ebf83bccb9-kube-api-access-49984\") pod \"c912f37b-377f-476e-8fc9-86ebf83bccb9\" (UID: \"c912f37b-377f-476e-8fc9-86ebf83bccb9\") " Nov 4 12:23:53.123291 kubelet[2735]: I1104 12:23:53.122806 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cni-path\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123291 kubelet[2735]: I1104 12:23:53.122830 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-run\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123291 kubelet[2735]: I1104 12:23:53.122850 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cld75\" (UniqueName: \"kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-kube-api-access-cld75\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123291 kubelet[2735]: I1104 12:23:53.122867 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-kernel\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123454 kubelet[2735]: I1104 12:23:53.122882 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-hubble-tls\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123454 kubelet[2735]: I1104 12:23:53.122906 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-lib-modules\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123454 kubelet[2735]: I1104 12:23:53.122920 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-net\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123454 kubelet[2735]: I1104 12:23:53.122936 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c912f37b-377f-476e-8fc9-86ebf83bccb9-cilium-config-path\") pod \"c912f37b-377f-476e-8fc9-86ebf83bccb9\" (UID: \"c912f37b-377f-476e-8fc9-86ebf83bccb9\") " Nov 4 12:23:53.123454 kubelet[2735]: I1104 12:23:53.122954 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-config-path\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.123454 kubelet[2735]: I1104 12:23:53.122983 2735 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-hostproc\") pod \"30dad178-8cfb-42e8-9abf-e5daba536063\" (UID: \"30dad178-8cfb-42e8-9abf-e5daba536063\") " Nov 4 12:23:53.125768 kubelet[2735]: I1104 12:23:53.125735 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-hostproc" 
(OuterVolumeSpecName: "hostproc") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.125827 kubelet[2735]: I1104 12:23:53.125736 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.125827 kubelet[2735]: I1104 12:23:53.125793 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.125935 kubelet[2735]: I1104 12:23:53.125914 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.126008 kubelet[2735]: I1104 12:23:53.125995 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.126127 kubelet[2735]: I1104 12:23:53.126063 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cni-path" (OuterVolumeSpecName: "cni-path") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.126182 kubelet[2735]: I1104 12:23:53.126075 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.126225 kubelet[2735]: I1104 12:23:53.126092 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.126273 kubelet[2735]: I1104 12:23:53.126110 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.127301 kubelet[2735]: I1104 12:23:53.126977 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 4 12:23:53.128509 kubelet[2735]: I1104 12:23:53.128485 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c912f37b-377f-476e-8fc9-86ebf83bccb9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c912f37b-377f-476e-8fc9-86ebf83bccb9" (UID: "c912f37b-377f-476e-8fc9-86ebf83bccb9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 12:23:53.129108 kubelet[2735]: I1104 12:23:53.128609 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 12:23:53.129108 kubelet[2735]: I1104 12:23:53.128738 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-kube-api-access-cld75" (OuterVolumeSpecName: "kube-api-access-cld75") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "kube-api-access-cld75". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 12:23:53.129918 kubelet[2735]: I1104 12:23:53.129877 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c912f37b-377f-476e-8fc9-86ebf83bccb9-kube-api-access-49984" (OuterVolumeSpecName: "kube-api-access-49984") pod "c912f37b-377f-476e-8fc9-86ebf83bccb9" (UID: "c912f37b-377f-476e-8fc9-86ebf83bccb9"). InnerVolumeSpecName "kube-api-access-49984". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 4 12:23:53.130149 kubelet[2735]: I1104 12:23:53.130128 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 4 12:23:53.130486 kubelet[2735]: I1104 12:23:53.130462 2735 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30dad178-8cfb-42e8-9abf-e5daba536063-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "30dad178-8cfb-42e8-9abf-e5daba536063" (UID: "30dad178-8cfb-42e8-9abf-e5daba536063"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223389 2735 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223416 2735 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223426 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c912f37b-377f-476e-8fc9-86ebf83bccb9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223435 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223444 2735 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223451 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223458 2735 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223487 kubelet[2735]: I1104 12:23:53.223464 2735 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223735 kubelet[2735]: I1104 12:23:53.223471 2735 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30dad178-8cfb-42e8-9abf-e5daba536063-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223735 kubelet[2735]: I1104 12:23:53.223479 2735 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223735 kubelet[2735]: I1104 12:23:53.223486 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-49984\" (UniqueName: \"kubernetes.io/projected/c912f37b-377f-476e-8fc9-86ebf83bccb9-kube-api-access-49984\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223735 kubelet[2735]: I1104 12:23:53.223493 2735 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223735 kubelet[2735]: I1104 12:23:53.223500 2735 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 4 
12:23:53.223735 kubelet[2735]: I1104 12:23:53.223509 2735 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cld75\" (UniqueName: \"kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-kube-api-access-cld75\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223735 kubelet[2735]: I1104 12:23:53.223516 2735 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30dad178-8cfb-42e8-9abf-e5daba536063-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.223735 kubelet[2735]: I1104 12:23:53.223522 2735 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30dad178-8cfb-42e8-9abf-e5daba536063-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 4 12:23:53.901734 kubelet[2735]: I1104 12:23:53.901630 2735 scope.go:117] "RemoveContainer" containerID="0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64" Nov 4 12:23:53.904903 systemd[1]: var-lib-kubelet-pods-30dad178\x2d8cfb\x2d42e8\x2d9abf\x2de5daba536063-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcld75.mount: Deactivated successfully. Nov 4 12:23:53.905011 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3fb50c5750a4e7a270242845c1f39e5adfaf2c1c78b6efe5cb9bed7f60e067ca-shm.mount: Deactivated successfully. Nov 4 12:23:53.905063 systemd[1]: var-lib-kubelet-pods-c912f37b\x2d377f\x2d476e\x2d8fc9\x2d86ebf83bccb9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d49984.mount: Deactivated successfully. Nov 4 12:23:53.905121 systemd[1]: var-lib-kubelet-pods-30dad178\x2d8cfb\x2d42e8\x2d9abf\x2de5daba536063-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 4 12:23:53.905172 systemd[1]: var-lib-kubelet-pods-30dad178\x2d8cfb\x2d42e8\x2d9abf\x2de5daba536063-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 4 12:23:53.908120 containerd[1589]: time="2025-11-04T12:23:53.907537387Z" level=info msg="RemoveContainer for \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\"" Nov 4 12:23:53.909295 systemd[1]: Removed slice kubepods-besteffort-podc912f37b_377f_476e_8fc9_86ebf83bccb9.slice - libcontainer container kubepods-besteffort-podc912f37b_377f_476e_8fc9_86ebf83bccb9.slice. 
Nov 4 12:23:53.912159 containerd[1589]: time="2025-11-04T12:23:53.912057745Z" level=info msg="RemoveContainer for \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" returns successfully" Nov 4 12:23:53.912457 kubelet[2735]: I1104 12:23:53.912437 2735 scope.go:117] "RemoveContainer" containerID="0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64" Nov 4 12:23:53.913471 containerd[1589]: time="2025-11-04T12:23:53.913418049Z" level=error msg="ContainerStatus for \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\": not found" Nov 4 12:23:53.913959 kubelet[2735]: E1104 12:23:53.913667 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\": not found" containerID="0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64" Nov 4 12:23:53.913959 kubelet[2735]: I1104 12:23:53.913800 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64"} err="failed to get container status \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\": rpc error: code = NotFound desc = an error occurred when try to find container \"0a927bc6a366b508c5c36915f38715be87e5eed896cb0836bfa98096c4d83b64\": not found" Nov 4 12:23:53.916814 kubelet[2735]: I1104 12:23:53.916788 2735 scope.go:117] "RemoveContainer" containerID="053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f" Nov 4 12:23:53.920335 containerd[1589]: time="2025-11-04T12:23:53.920301209Z" level=info msg="RemoveContainer for \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\"" Nov 4 12:23:53.925927 systemd[1]: Removed slice kubepods-burstable-pod30dad178_8cfb_42e8_9abf_e5daba536063.slice - libcontainer container kubepods-burstable-pod30dad178_8cfb_42e8_9abf_e5daba536063.slice. Nov 4 12:23:53.926037 systemd[1]: kubepods-burstable-pod30dad178_8cfb_42e8_9abf_e5daba536063.slice: Consumed 6.142s CPU time, 121.6M memory peak, 152K read from disk, 12.9M written to disk. 
Nov 4 12:23:53.930576 containerd[1589]: time="2025-11-04T12:23:53.929224044Z" level=info msg="RemoveContainer for \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" returns successfully" Nov 4 12:23:53.931495 kubelet[2735]: I1104 12:23:53.931470 2735 scope.go:117] "RemoveContainer" containerID="0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69" Nov 4 12:23:53.933630 containerd[1589]: time="2025-11-04T12:23:53.933601560Z" level=info msg="RemoveContainer for \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\"" Nov 4 12:23:53.937567 containerd[1589]: time="2025-11-04T12:23:53.937526429Z" level=info msg="RemoveContainer for \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" returns successfully" Nov 4 12:23:53.937866 kubelet[2735]: I1104 12:23:53.937844 2735 scope.go:117] "RemoveContainer" containerID="8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5" Nov 4 12:23:53.942158 containerd[1589]: time="2025-11-04T12:23:53.941969346Z" level=info msg="RemoveContainer for \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\"" Nov 4 12:23:53.946104 containerd[1589]: time="2025-11-04T12:23:53.946076497Z" level=info msg="RemoveContainer for \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" returns successfully" Nov 4 12:23:53.946312 kubelet[2735]: I1104 12:23:53.946241 2735 scope.go:117] "RemoveContainer" containerID="0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195" Nov 4 12:23:53.947879 containerd[1589]: time="2025-11-04T12:23:53.947854288Z" level=info msg="RemoveContainer for \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\"" Nov 4 12:23:53.950596 containerd[1589]: time="2025-11-04T12:23:53.950571816Z" level=info msg="RemoveContainer for \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" returns successfully" Nov 4 12:23:53.950783 kubelet[2735]: I1104 12:23:53.950748 2735 scope.go:117] "RemoveContainer" containerID="8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92" Nov 4 12:23:53.951996 containerd[1589]: time="2025-11-04T12:23:53.951963760Z" level=info msg="RemoveContainer for \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\"" Nov 4 12:23:53.954688 containerd[1589]: time="2025-11-04T12:23:53.954617726Z" level=info msg="RemoveContainer for \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" returns successfully" Nov 4 12:23:53.954855 kubelet[2735]: I1104 12:23:53.954779 2735 scope.go:117] "RemoveContainer" containerID="053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f" Nov 4 12:23:53.954993 containerd[1589]: time="2025-11-04T12:23:53.954954492Z" level=error msg="ContainerStatus for \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\": not found" Nov 4 12:23:53.955118 kubelet[2735]: E1104 12:23:53.955095 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\": not found" containerID="053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f" Nov 4 12:23:53.955204 kubelet[2735]: I1104 12:23:53.955161 2735 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f"} err="failed to get container status \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"053239e8f24c415d022567f9b5295726b840222dd11a57e281398604363d5b7f\": not found" Nov 4 12:23:53.955204 kubelet[2735]: I1104 12:23:53.955188 2735 scope.go:117] "RemoveContainer" containerID="0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69" Nov 4 12:23:53.955436 containerd[1589]: time="2025-11-04T12:23:53.955368659Z" level=error msg="ContainerStatus for \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\": not found" Nov 4 12:23:53.955604 kubelet[2735]: E1104 12:23:53.955577 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\": not found" containerID="0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69" Nov 4 12:23:53.955691 kubelet[2735]: I1104 12:23:53.955671 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69"} err="failed to get container status \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\": rpc error: code = NotFound desc = an error occurred when try to find container \"0014509573a55856349d9cd9b12910a60121dbc2d37c1ed27a544ab9db7c6c69\": not found" Nov 4 12:23:53.955753 kubelet[2735]: I1104 12:23:53.955742 2735 scope.go:117] "RemoveContainer" containerID="8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5" Nov 4 12:23:53.956042 containerd[1589]: time="2025-11-04T12:23:53.956012870Z" level=error msg="ContainerStatus for \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\": not found" Nov 4 12:23:53.956133 kubelet[2735]: E1104 12:23:53.956114 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\": not found" containerID="8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5" Nov 4 12:23:53.956165 kubelet[2735]: I1104 12:23:53.956137 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5"} err="failed to get container status \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\": rpc error: code = NotFound desc = an error occurred when try to find container \"8f6ba5ea2cfc1c6c87539d7f732bd90c833c68b27c5b9b46cfed5cb55f661cb5\": not found" Nov 4 12:23:53.956165 kubelet[2735]: I1104 12:23:53.956151 2735 scope.go:117] "RemoveContainer" containerID="0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195" Nov 4 12:23:53.956481 containerd[1589]: time="2025-11-04T12:23:53.956444278Z" level=error msg="ContainerStatus for \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\" failed" error="rpc error: 
code = NotFound desc = an error occurred when try to find container \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\": not found" Nov 4 12:23:53.956684 kubelet[2735]: E1104 12:23:53.956661 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\": not found" containerID="0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195" Nov 4 12:23:53.956738 kubelet[2735]: I1104 12:23:53.956684 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195"} err="failed to get container status \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\": rpc error: code = NotFound desc = an error occurred when try to find container \"0baf9826847a9c2ebe4a66e472f0b539e222a692ca0a171831eddd4d34226195\": not found" Nov 4 12:23:53.956738 kubelet[2735]: I1104 12:23:53.956698 2735 scope.go:117] "RemoveContainer" containerID="8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92" Nov 4 12:23:53.956867 containerd[1589]: time="2025-11-04T12:23:53.956845845Z" level=error msg="ContainerStatus for \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\": not found" Nov 4 12:23:53.957032 kubelet[2735]: E1104 12:23:53.957014 2735 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\": not found" containerID="8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92" Nov 4 12:23:53.957077 kubelet[2735]: I1104 12:23:53.957032 2735 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92"} err="failed to get container status \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e8bc604d2d371a0cf92166c8fc1ba414874bb37a0b6cc5825dec2eb97f88c92\": not found" Nov 4 12:23:54.701796 kubelet[2735]: I1104 12:23:54.701028 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30dad178-8cfb-42e8-9abf-e5daba536063" path="/var/lib/kubelet/pods/30dad178-8cfb-42e8-9abf-e5daba536063/volumes" Nov 4 12:23:54.701796 kubelet[2735]: I1104 12:23:54.701556 2735 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c912f37b-377f-476e-8fc9-86ebf83bccb9" path="/var/lib/kubelet/pods/c912f37b-377f-476e-8fc9-86ebf83bccb9/volumes" Nov 4 12:23:54.785321 sshd[4348]: Connection closed by 10.0.0.1 port 54858 Nov 4 12:23:54.785766 sshd-session[4345]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:54.803555 systemd[1]: sshd@21-10.0.0.89:22-10.0.0.1:54858.service: Deactivated successfully. Nov 4 12:23:54.805106 systemd[1]: session-22.scope: Deactivated successfully. Nov 4 12:23:54.805968 systemd-logind[1570]: Session 22 logged out. Waiting for processes to exit. Nov 4 12:23:54.808365 systemd[1]: Started sshd@22-10.0.0.89:22-10.0.0.1:54860.service - OpenSSH per-connection server daemon (10.0.0.1:54860). 
Nov 4 12:23:54.808981 systemd-logind[1570]: Removed session 22. Nov 4 12:23:54.863414 sshd[4505]: Accepted publickey for core from 10.0.0.1 port 54860 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:54.864761 sshd-session[4505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:54.869337 systemd-logind[1570]: New session 23 of user core. Nov 4 12:23:54.876432 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 4 12:23:56.741947 kubelet[2735]: E1104 12:23:56.741901 2735 kubelet.go:3011] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 4 12:23:56.808209 sshd[4509]: Connection closed by 10.0.0.1 port 54860 Nov 4 12:23:56.808374 sshd-session[4505]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:56.817458 systemd[1]: sshd@22-10.0.0.89:22-10.0.0.1:54860.service: Deactivated successfully. Nov 4 12:23:56.819069 systemd[1]: session-23.scope: Deactivated successfully. Nov 4 12:23:56.819243 systemd[1]: session-23.scope: Consumed 1.858s CPU time, 25.9M memory peak. Nov 4 12:23:56.819786 systemd-logind[1570]: Session 23 logged out. Waiting for processes to exit. Nov 4 12:23:56.823829 systemd[1]: Started sshd@23-10.0.0.89:22-10.0.0.1:54868.service - OpenSSH per-connection server daemon (10.0.0.1:54868). Nov 4 12:23:56.826509 systemd-logind[1570]: Removed session 23. Nov 4 12:23:56.859125 systemd[1]: Created slice kubepods-burstable-podb1eb8f84_9430_4eae_b3bd_3f361172e9d6.slice - libcontainer container kubepods-burstable-podb1eb8f84_9430_4eae_b3bd_3f361172e9d6.slice. Nov 4 12:23:56.899758 sshd[4521]: Accepted publickey for core from 10.0.0.1 port 54868 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:56.900900 sshd-session[4521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:56.905377 systemd-logind[1570]: New session 24 of user core. Nov 4 12:23:56.917434 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 4 12:23:56.944298 kubelet[2735]: I1104 12:23:56.944211 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4tmw\" (UniqueName: \"kubernetes.io/projected/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-kube-api-access-l4tmw\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944298 kubelet[2735]: I1104 12:23:56.944248 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-xtables-lock\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944298 kubelet[2735]: I1104 12:23:56.944264 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-host-proc-sys-kernel\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944767 kubelet[2735]: I1104 12:23:56.944447 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-cilium-run\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944767 kubelet[2735]: I1104 12:23:56.944476 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-lib-modules\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944767 kubelet[2735]: I1104 12:23:56.944494 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-hostproc\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944767 kubelet[2735]: I1104 12:23:56.944507 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-cilium-cgroup\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944767 kubelet[2735]: I1104 12:23:56.944522 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-bpf-maps\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944767 kubelet[2735]: I1104 12:23:56.944535 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-cni-path\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944904 kubelet[2735]: I1104 12:23:56.944548 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-host-proc-sys-net\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944904 kubelet[2735]: I1104 12:23:56.944627 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-hubble-tls\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944904 kubelet[2735]: I1104 12:23:56.944657 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-clustermesh-secrets\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944904 kubelet[2735]: I1104 12:23:56.944672 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-cilium-ipsec-secrets\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944904 kubelet[2735]: I1104 12:23:56.944686 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-etc-cni-netd\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.944998 kubelet[2735]: I1104 12:23:56.944699 2735 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1eb8f84-9430-4eae-b3bd-3f361172e9d6-cilium-config-path\") pod \"cilium-b7czv\" (UID: \"b1eb8f84-9430-4eae-b3bd-3f361172e9d6\") " pod="kube-system/cilium-b7czv" Nov 4 12:23:56.964846 sshd[4524]: Connection closed by 10.0.0.1 port 54868 Nov 4 12:23:56.965440 sshd-session[4521]: pam_unix(sshd:session): session closed for user core Nov 4 12:23:56.974244 systemd[1]: sshd@23-10.0.0.89:22-10.0.0.1:54868.service: Deactivated successfully. Nov 4 12:23:56.975784 systemd[1]: session-24.scope: Deactivated successfully. Nov 4 12:23:56.976412 systemd-logind[1570]: Session 24 logged out. Waiting for processes to exit. Nov 4 12:23:56.978521 systemd[1]: Started sshd@24-10.0.0.89:22-10.0.0.1:54878.service - OpenSSH per-connection server daemon (10.0.0.1:54878). Nov 4 12:23:56.979016 systemd-logind[1570]: Removed session 24. Nov 4 12:23:57.034249 sshd[4531]: Accepted publickey for core from 10.0.0.1 port 54878 ssh2: RSA SHA256:BU+2P8LonXUlklSP4qnprs25Z/jySPnCvqmOgDUDXeU Nov 4 12:23:57.035397 sshd-session[4531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 4 12:23:57.039170 systemd-logind[1570]: New session 25 of user core. Nov 4 12:23:57.047436 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 4 12:23:57.164472 kubelet[2735]: E1104 12:23:57.164427 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:57.165828 containerd[1589]: time="2025-11-04T12:23:57.165494974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7czv,Uid:b1eb8f84-9430-4eae-b3bd-3f361172e9d6,Namespace:kube-system,Attempt:0,}" Nov 4 12:23:57.180021 containerd[1589]: time="2025-11-04T12:23:57.179642034Z" level=info msg="connecting to shim aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b" address="unix:///run/containerd/s/cac6e16fc7116f9236fab29bb6d16fbe0abca11158484ecc397fe8a0afa43029" namespace=k8s.io protocol=ttrpc version=3 Nov 4 12:23:57.212431 systemd[1]: Started cri-containerd-aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b.scope - libcontainer container aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b. Nov 4 12:23:57.232448 containerd[1589]: time="2025-11-04T12:23:57.232408215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-b7czv,Uid:b1eb8f84-9430-4eae-b3bd-3f361172e9d6,Namespace:kube-system,Attempt:0,} returns sandbox id \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\"" Nov 4 12:23:57.233447 kubelet[2735]: E1104 12:23:57.233425 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:57.237392 containerd[1589]: time="2025-11-04T12:23:57.237360412Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 4 12:23:57.249425 containerd[1589]: time="2025-11-04T12:23:57.249386079Z" level=info msg="Container b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:57.256313 containerd[1589]: time="2025-11-04T12:23:57.256266866Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58\"" Nov 4 12:23:57.257171 containerd[1589]: time="2025-11-04T12:23:57.257147280Z" level=info msg="StartContainer for \"b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58\"" Nov 4 12:23:57.258518 containerd[1589]: time="2025-11-04T12:23:57.258489220Z" level=info msg="connecting to shim b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58" address="unix:///run/containerd/s/cac6e16fc7116f9236fab29bb6d16fbe0abca11158484ecc397fe8a0afa43029" protocol=ttrpc version=3 Nov 4 12:23:57.279430 systemd[1]: Started cri-containerd-b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58.scope - libcontainer container b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58. Nov 4 12:23:57.302257 containerd[1589]: time="2025-11-04T12:23:57.301855255Z" level=info msg="StartContainer for \"b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58\" returns successfully" Nov 4 12:23:57.311082 systemd[1]: cri-containerd-b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58.scope: Deactivated successfully. 
Nov 4 12:23:57.314346 containerd[1589]: time="2025-11-04T12:23:57.314313489Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58\" id:\"b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58\" pid:4602 exited_at:{seconds:1762259037 nanos:313932803}" Nov 4 12:23:57.314414 containerd[1589]: time="2025-11-04T12:23:57.314363410Z" level=info msg="received exit event container_id:\"b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58\" id:\"b38ad2510c9d82496f50e8962dcb662625691ef6dfec9a0ba2a4098e9817ee58\" pid:4602 exited_at:{seconds:1762259037 nanos:313932803}" Nov 4 12:23:57.778655 kubelet[2735]: I1104 12:23:57.778593 2735 setters.go:543] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-04T12:23:57Z","lastTransitionTime":"2025-11-04T12:23:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 4 12:23:57.930838 kubelet[2735]: E1104 12:23:57.930812 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:57.936415 containerd[1589]: time="2025-11-04T12:23:57.936381766Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 4 12:23:57.942519 containerd[1589]: time="2025-11-04T12:23:57.942486541Z" level=info msg="Container 459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:57.948413 containerd[1589]: time="2025-11-04T12:23:57.948377552Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda\"" Nov 4 12:23:57.948808 containerd[1589]: time="2025-11-04T12:23:57.948786199Z" level=info msg="StartContainer for \"459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda\"" Nov 4 12:23:57.949530 containerd[1589]: time="2025-11-04T12:23:57.949508410Z" level=info msg="connecting to shim 459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda" address="unix:///run/containerd/s/cac6e16fc7116f9236fab29bb6d16fbe0abca11158484ecc397fe8a0afa43029" protocol=ttrpc version=3 Nov 4 12:23:57.973424 systemd[1]: Started cri-containerd-459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda.scope - libcontainer container 459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda. Nov 4 12:23:57.996668 containerd[1589]: time="2025-11-04T12:23:57.996572502Z" level=info msg="StartContainer for \"459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda\" returns successfully" Nov 4 12:23:58.002208 systemd[1]: cri-containerd-459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda.scope: Deactivated successfully. 
Nov 4 12:23:58.002750 containerd[1589]: time="2025-11-04T12:23:58.002711277Z" level=info msg="received exit event container_id:\"459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda\" id:\"459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda\" pid:4648 exited_at:{seconds:1762259038 nanos:2425633}" Nov 4 12:23:58.002906 containerd[1589]: time="2025-11-04T12:23:58.002887880Z" level=info msg="TaskExit event in podsandbox handler container_id:\"459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda\" id:\"459910dfd0a2e446aa1dfd7cbfb4d3b2e8f7074b6f5a6269263fc4fbd3e2fdda\" pid:4648 exited_at:{seconds:1762259038 nanos:2425633}" Nov 4 12:23:58.935161 kubelet[2735]: E1104 12:23:58.935131 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 4 12:23:58.943784 containerd[1589]: time="2025-11-04T12:23:58.943747795Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 4 12:23:58.956878 containerd[1589]: time="2025-11-04T12:23:58.956837193Z" level=info msg="Container d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b: CDI devices from CRI Config.CDIDevices: []" Nov 4 12:23:58.971466 containerd[1589]: time="2025-11-04T12:23:58.971419814Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b\"" Nov 4 12:23:58.974297 containerd[1589]: time="2025-11-04T12:23:58.973009718Z" level=info msg="StartContainer for \"d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b\"" Nov 4 12:23:58.974726 containerd[1589]: time="2025-11-04T12:23:58.974680383Z" level=info msg="connecting to shim d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b" address="unix:///run/containerd/s/cac6e16fc7116f9236fab29bb6d16fbe0abca11158484ecc397fe8a0afa43029" protocol=ttrpc version=3 Nov 4 12:23:59.006433 systemd[1]: Started cri-containerd-d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b.scope - libcontainer container d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b. Nov 4 12:23:59.035205 systemd[1]: cri-containerd-d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b.scope: Deactivated successfully. 
Nov 4 12:23:59.037062 containerd[1589]: time="2025-11-04T12:23:59.037029512Z" level=info msg="received exit event container_id:\"d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b\" id:\"d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b\" pid:4691 exited_at:{seconds:1762259039 nanos:36868149}"
Nov 4 12:23:59.037198 containerd[1589]: time="2025-11-04T12:23:59.037163554Z" level=info msg="StartContainer for \"d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b\" returns successfully"
Nov 4 12:23:59.037347 containerd[1589]: time="2025-11-04T12:23:59.037085112Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b\" id:\"d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b\" pid:4691 exited_at:{seconds:1762259039 nanos:36868149}"
Nov 4 12:23:59.055174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d8af1a96d17085cb50d446409c60ec8ba3aae4e98d7acdfc534e946f433eb47b-rootfs.mount: Deactivated successfully.
Nov 4 12:23:59.943956 kubelet[2735]: E1104 12:23:59.942656 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:23:59.946514 containerd[1589]: time="2025-11-04T12:23:59.946476257Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 4 12:23:59.964367 containerd[1589]: time="2025-11-04T12:23:59.964306239Z" level=info msg="Container bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c: CDI devices from CRI Config.CDIDevices: []"
Nov 4 12:23:59.973183 containerd[1589]: time="2025-11-04T12:23:59.973143929Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c\""
Nov 4 12:23:59.973665 containerd[1589]: time="2025-11-04T12:23:59.973643577Z" level=info msg="StartContainer for \"bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c\""
Nov 4 12:23:59.974599 containerd[1589]: time="2025-11-04T12:23:59.974566870Z" level=info msg="connecting to shim bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c" address="unix:///run/containerd/s/cac6e16fc7116f9236fab29bb6d16fbe0abca11158484ecc397fe8a0afa43029" protocol=ttrpc version=3
Nov 4 12:23:59.995428 systemd[1]: Started cri-containerd-bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c.scope - libcontainer container bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c.
Nov 4 12:24:00.015833 containerd[1589]: time="2025-11-04T12:24:00.015262303Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c\" id:\"bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c\" pid:4730 exited_at:{seconds:1762259040 nanos:15042020}"
Nov 4 12:24:00.015311 systemd[1]: cri-containerd-bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c.scope: Deactivated successfully.
Nov 4 12:24:00.016723 containerd[1589]: time="2025-11-04T12:24:00.016613443Z" level=info msg="received exit event container_id:\"bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c\" id:\"bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c\" pid:4730 exited_at:{seconds:1762259040 nanos:15042020}"
Nov 4 12:24:00.022887 containerd[1589]: time="2025-11-04T12:24:00.022863812Z" level=info msg="StartContainer for \"bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c\" returns successfully"
Nov 4 12:24:00.055264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc32412bedc4e020b552abd4e62b32a393fd0d9537bb75888e69fe52a69ede0c-rootfs.mount: Deactivated successfully.
Nov 4 12:24:00.700192 kubelet[2735]: E1104 12:24:00.700146 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:00.700362 kubelet[2735]: E1104 12:24:00.700144 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:00.948262 kubelet[2735]: E1104 12:24:00.948207 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:00.954905 containerd[1589]: time="2025-11-04T12:24:00.954813236Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 4 12:24:00.973971 containerd[1589]: time="2025-11-04T12:24:00.973938910Z" level=info msg="Container 53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980: CDI devices from CRI Config.CDIDevices: []"
Nov 4 12:24:00.981693 containerd[1589]: time="2025-11-04T12:24:00.981655981Z" level=info msg="CreateContainer within sandbox \"aed124846ee6e31c79b5672072588422e7882c15daf156721c2fa90af6e05e0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\""
Nov 4 12:24:00.982325 containerd[1589]: time="2025-11-04T12:24:00.982302550Z" level=info msg="StartContainer for \"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\""
Nov 4 12:24:00.983221 containerd[1589]: time="2025-11-04T12:24:00.983188562Z" level=info msg="connecting to shim 53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980" address="unix:///run/containerd/s/cac6e16fc7116f9236fab29bb6d16fbe0abca11158484ecc397fe8a0afa43029" protocol=ttrpc version=3
Nov 4 12:24:01.002410 systemd[1]: Started cri-containerd-53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980.scope - libcontainer container 53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980.
Nov 4 12:24:01.031620 containerd[1589]: time="2025-11-04T12:24:01.031586524Z" level=info msg="StartContainer for \"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\" returns successfully"
Nov 4 12:24:01.084272 containerd[1589]: time="2025-11-04T12:24:01.084216377Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\" id:\"569a071710310059f82cd9de6b64d3498dd08ee50fb0d491b2e8f9e69ed0d161\" pid:4799 exited_at:{seconds:1762259041 nanos:83934493}"
Nov 4 12:24:01.305337 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Nov 4 12:24:01.955388 kubelet[2735]: E1104 12:24:01.955360 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:01.971295 kubelet[2735]: I1104 12:24:01.971031 2735 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-b7czv" podStartSLOduration=5.971016011 podStartE2EDuration="5.971016011s" podCreationTimestamp="2025-11-04 12:23:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-04 12:24:01.968888662 +0000 UTC m=+75.379179973" watchObservedRunningTime="2025-11-04 12:24:01.971016011 +0000 UTC m=+75.381307322"
Nov 4 12:24:03.163805 kubelet[2735]: E1104 12:24:03.163752 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:03.557142 containerd[1589]: time="2025-11-04T12:24:03.557093202Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\" id:\"4abc19cdedc12638bd4e6f7b9837bc8d436e0a5bf153848a814b21567d337a9e\" pid:5157 exit_status:1 exited_at:{seconds:1762259043 nanos:556707477}"
Nov 4 12:24:04.105223 systemd-networkd[1497]: lxc_health: Link UP
Nov 4 12:24:04.117474 systemd-networkd[1497]: lxc_health: Gained carrier
Nov 4 12:24:05.165649 kubelet[2735]: E1104 12:24:05.165605 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:05.303485 systemd-networkd[1497]: lxc_health: Gained IPv6LL
Nov 4 12:24:05.671914 containerd[1589]: time="2025-11-04T12:24:05.671876047Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\" id:\"d75fd8dedd36fbee7a21c3ca97fda9636033eb0bdeae9c8bc81f610deb4ba84c\" pid:5340 exited_at:{seconds:1762259045 nanos:671417561}"
Nov 4 12:24:05.964341 kubelet[2735]: E1104 12:24:05.964207 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:06.965594 kubelet[2735]: E1104 12:24:06.965550 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:07.790588 containerd[1589]: time="2025-11-04T12:24:07.790549768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\" id:\"bc5539ddd06a1dda5b9399d735c69a079657c416288b701d90e704125b1c21d6\" pid:5373 exited_at:{seconds:1762259047 nanos:790049322}"
Nov 4 12:24:08.701344 kubelet[2735]: E1104 12:24:08.701316 2735 dns.go:154] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 4 12:24:09.914884 containerd[1589]: time="2025-11-04T12:24:09.914832874Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53c6f6af6f2971f1bdfed829fef69f8814873abd48b9ec4e0348df756d53a980\" id:\"38a57698a631e5ce52dbd36194d3026ab6fb6bbff35cf54406375b135ba76af7\" pid:5398 exited_at:{seconds:1762259049 nanos:913824142}"
Nov 4 12:24:09.929526 sshd[4537]: Connection closed by 10.0.0.1 port 54878
Nov 4 12:24:09.930612 sshd-session[4531]: pam_unix(sshd:session): session closed for user core
Nov 4 12:24:09.934420 systemd[1]: sshd@24-10.0.0.89:22-10.0.0.1:54878.service: Deactivated successfully.
Nov 4 12:24:09.936078 systemd[1]: session-25.scope: Deactivated successfully.
Nov 4 12:24:09.936851 systemd-logind[1570]: Session 25 logged out. Waiting for processes to exit.
Nov 4 12:24:09.938323 systemd-logind[1570]: Removed session 25.