Nov 8 00:01:02.402830 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 8 00:01:02.402855 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Nov 7 22:24:06 -00 2025 Nov 8 00:01:02.402864 kernel: KASLR enabled Nov 8 00:01:02.402880 kernel: efi: EFI v2.7 by EDK II Nov 8 00:01:02.402887 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Nov 8 00:01:02.402893 kernel: random: crng init done Nov 8 00:01:02.402900 kernel: secureboot: Secure boot disabled Nov 8 00:01:02.402906 kernel: ACPI: Early table checksum verification disabled Nov 8 00:01:02.402915 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Nov 8 00:01:02.402921 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 8 00:01:02.402927 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402933 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402939 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402946 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402954 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402961 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402967 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402974 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402980 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 8 00:01:02.402987 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 8 
00:01:02.402993 kernel: ACPI: Use ACPI SPCR as default console: No Nov 8 00:01:02.403000 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 8 00:01:02.403008 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Nov 8 00:01:02.403014 kernel: Zone ranges: Nov 8 00:01:02.403021 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 8 00:01:02.403028 kernel: DMA32 empty Nov 8 00:01:02.403034 kernel: Normal empty Nov 8 00:01:02.403040 kernel: Device empty Nov 8 00:01:02.403047 kernel: Movable zone start for each node Nov 8 00:01:02.403063 kernel: Early memory node ranges Nov 8 00:01:02.403071 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Nov 8 00:01:02.403077 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Nov 8 00:01:02.403084 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Nov 8 00:01:02.403090 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Nov 8 00:01:02.403099 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Nov 8 00:01:02.403105 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Nov 8 00:01:02.403111 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Nov 8 00:01:02.403118 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Nov 8 00:01:02.403124 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Nov 8 00:01:02.403131 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 8 00:01:02.403141 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 8 00:01:02.403148 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Nov 8 00:01:02.403155 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 8 00:01:02.403161 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 8 00:01:02.403168 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 8 00:01:02.403175 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Nov 8 00:01:02.403182 kernel: psci: probing for conduit 
method from ACPI. Nov 8 00:01:02.403189 kernel: psci: PSCIv1.1 detected in firmware. Nov 8 00:01:02.403197 kernel: psci: Using standard PSCI v0.2 function IDs Nov 8 00:01:02.403204 kernel: psci: Trusted OS migration not required Nov 8 00:01:02.403211 kernel: psci: SMC Calling Convention v1.1 Nov 8 00:01:02.403218 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 8 00:01:02.403225 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 8 00:01:02.403232 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 8 00:01:02.403239 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 8 00:01:02.403247 kernel: Detected PIPT I-cache on CPU0 Nov 8 00:01:02.403254 kernel: CPU features: detected: GIC system register CPU interface Nov 8 00:01:02.403260 kernel: CPU features: detected: Spectre-v4 Nov 8 00:01:02.403267 kernel: CPU features: detected: Spectre-BHB Nov 8 00:01:02.403276 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 8 00:01:02.403283 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 8 00:01:02.403289 kernel: CPU features: detected: ARM erratum 1418040 Nov 8 00:01:02.403296 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 8 00:01:02.403303 kernel: alternatives: applying boot alternatives Nov 8 00:01:02.403311 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8bfefa4d5bf8d825e537335d2d0fa0f6d70ecdd5bfc7a28e4bcd37bbf7abce90 Nov 8 00:01:02.403318 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 8 00:01:02.403325 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 8 00:01:02.403332 kernel: Fallback order for Node 0: 0 Nov 8 00:01:02.403339 
kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 8 00:01:02.403346 kernel: Policy zone: DMA Nov 8 00:01:02.403353 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 8 00:01:02.403360 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 8 00:01:02.403367 kernel: software IO TLB: area num 4. Nov 8 00:01:02.403374 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 8 00:01:02.403388 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Nov 8 00:01:02.403396 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 8 00:01:02.403402 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 8 00:01:02.403410 kernel: rcu: RCU event tracing is enabled. Nov 8 00:01:02.403417 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 8 00:01:02.403424 kernel: Trampoline variant of Tasks RCU enabled. Nov 8 00:01:02.403432 kernel: Tracing variant of Tasks RCU enabled. Nov 8 00:01:02.403439 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 8 00:01:02.403446 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 8 00:01:02.403453 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 8 00:01:02.403460 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Nov 8 00:01:02.403467 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 8 00:01:02.403474 kernel: GICv3: 256 SPIs implemented Nov 8 00:01:02.403481 kernel: GICv3: 0 Extended SPIs implemented Nov 8 00:01:02.403488 kernel: Root IRQ handler: gic_handle_irq Nov 8 00:01:02.403494 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 8 00:01:02.403501 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 8 00:01:02.403509 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 8 00:01:02.403516 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 8 00:01:02.403523 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 8 00:01:02.403530 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 8 00:01:02.403537 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 8 00:01:02.403552 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 8 00:01:02.403559 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 8 00:01:02.403566 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:01:02.403573 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 8 00:01:02.403580 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 8 00:01:02.403587 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 8 00:01:02.403595 kernel: arm-pv: using stolen time PV Nov 8 00:01:02.403603 kernel: Console: colour dummy device 80x25 Nov 8 00:01:02.403610 kernel: ACPI: Core revision 20240827 Nov 8 00:01:02.403618 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 
50.00 BogoMIPS (lpj=25000) Nov 8 00:01:02.403626 kernel: pid_max: default: 32768 minimum: 301 Nov 8 00:01:02.403633 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 8 00:01:02.403640 kernel: landlock: Up and running. Nov 8 00:01:02.403647 kernel: SELinux: Initializing. Nov 8 00:01:02.403656 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:01:02.403663 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 8 00:01:02.403671 kernel: rcu: Hierarchical SRCU implementation. Nov 8 00:01:02.403678 kernel: rcu: Max phase no-delay instances is 400. Nov 8 00:01:02.403686 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 8 00:01:02.403693 kernel: Remapping and enabling EFI services. Nov 8 00:01:02.403700 kernel: smp: Bringing up secondary CPUs ... Nov 8 00:01:02.403709 kernel: Detected PIPT I-cache on CPU1 Nov 8 00:01:02.403721 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 8 00:01:02.403730 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 8 00:01:02.403738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:01:02.403745 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 8 00:01:02.403753 kernel: Detected PIPT I-cache on CPU2 Nov 8 00:01:02.403761 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 8 00:01:02.403770 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 8 00:01:02.403778 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:01:02.403786 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 8 00:01:02.403794 kernel: Detected PIPT I-cache on CPU3 Nov 8 00:01:02.403801 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 8 00:01:02.403809 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 8 
00:01:02.403817 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 8 00:01:02.403826 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 8 00:01:02.403833 kernel: smp: Brought up 1 node, 4 CPUs Nov 8 00:01:02.403841 kernel: SMP: Total of 4 processors activated. Nov 8 00:01:02.403848 kernel: CPU: All CPU(s) started at EL1 Nov 8 00:01:02.403856 kernel: CPU features: detected: 32-bit EL0 Support Nov 8 00:01:02.403863 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 8 00:01:02.403875 kernel: CPU features: detected: Common not Private translations Nov 8 00:01:02.403885 kernel: CPU features: detected: CRC32 instructions Nov 8 00:01:02.403893 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 8 00:01:02.403901 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 8 00:01:02.403909 kernel: CPU features: detected: LSE atomic instructions Nov 8 00:01:02.403916 kernel: CPU features: detected: Privileged Access Never Nov 8 00:01:02.403924 kernel: CPU features: detected: RAS Extension Support Nov 8 00:01:02.403932 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 8 00:01:02.403939 kernel: alternatives: applying system-wide alternatives Nov 8 00:01:02.403949 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 8 00:01:02.403957 kernel: Memory: 2450272K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 13120K init, 1038K bss, 99680K reserved, 16384K cma-reserved) Nov 8 00:01:02.403965 kernel: devtmpfs: initialized Nov 8 00:01:02.403973 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 8 00:01:02.403981 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 8 00:01:02.403989 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 8 00:01:02.403996 kernel: 0 pages in range for non-PLT usage Nov 8 00:01:02.404005 
kernel: 515024 pages in range for PLT usage Nov 8 00:01:02.404013 kernel: pinctrl core: initialized pinctrl subsystem Nov 8 00:01:02.404020 kernel: SMBIOS 3.0.0 present. Nov 8 00:01:02.404028 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 8 00:01:02.404036 kernel: DMI: Memory slots populated: 1/1 Nov 8 00:01:02.404043 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 8 00:01:02.404050 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 8 00:01:02.404067 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 8 00:01:02.404075 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 8 00:01:02.404083 kernel: audit: initializing netlink subsys (disabled) Nov 8 00:01:02.404090 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Nov 8 00:01:02.404098 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 8 00:01:02.404105 kernel: cpuidle: using governor menu Nov 8 00:01:02.404113 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. 
Nov 8 00:01:02.404122 kernel: ASID allocator initialised with 32768 entries Nov 8 00:01:02.404130 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 8 00:01:02.404138 kernel: Serial: AMBA PL011 UART driver Nov 8 00:01:02.404145 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 8 00:01:02.404154 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 8 00:01:02.404161 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 8 00:01:02.404169 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 8 00:01:02.404176 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 8 00:01:02.404185 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 8 00:01:02.404193 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 8 00:01:02.404200 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 8 00:01:02.404208 kernel: ACPI: Added _OSI(Module Device) Nov 8 00:01:02.404215 kernel: ACPI: Added _OSI(Processor Device) Nov 8 00:01:02.404223 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 8 00:01:02.404230 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 8 00:01:02.404239 kernel: ACPI: Interpreter enabled Nov 8 00:01:02.404247 kernel: ACPI: Using GIC for interrupt routing Nov 8 00:01:02.404254 kernel: ACPI: MCFG table detected, 1 entries Nov 8 00:01:02.404262 kernel: ACPI: CPU0 has been hot-added Nov 8 00:01:02.404270 kernel: ACPI: CPU1 has been hot-added Nov 8 00:01:02.404277 kernel: ACPI: CPU2 has been hot-added Nov 8 00:01:02.404285 kernel: ACPI: CPU3 has been hot-added Nov 8 00:01:02.404294 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 8 00:01:02.404302 kernel: printk: legacy console [ttyAMA0] enabled Nov 8 00:01:02.404310 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 8 00:01:02.404476 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig 
ASPM ClockPM Segments MSI HPX-Type3] Nov 8 00:01:02.404565 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 8 00:01:02.404648 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 8 00:01:02.404731 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 8 00:01:02.404812 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 8 00:01:02.404822 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 8 00:01:02.404830 kernel: PCI host bridge to bus 0000:00 Nov 8 00:01:02.404930 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 8 00:01:02.405006 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 8 00:01:02.407175 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 8 00:01:02.407314 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 8 00:01:02.407431 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 8 00:01:02.407534 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 8 00:01:02.407643 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 8 00:01:02.407726 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 8 00:01:02.407817 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 8 00:01:02.407913 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 8 00:01:02.407997 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 8 00:01:02.408816 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 8 00:01:02.409798 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 8 00:01:02.409903 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 8 00:01:02.409992 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] 
Nov 8 00:01:02.410003 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 8 00:01:02.410011 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 8 00:01:02.410019 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 8 00:01:02.410027 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 8 00:01:02.410035 kernel: iommu: Default domain type: Translated Nov 8 00:01:02.410046 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 8 00:01:02.410069 kernel: efivars: Registered efivars operations Nov 8 00:01:02.410078 kernel: vgaarb: loaded Nov 8 00:01:02.410086 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 8 00:01:02.410093 kernel: VFS: Disk quotas dquot_6.6.0 Nov 8 00:01:02.410101 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 8 00:01:02.410109 kernel: pnp: PnP ACPI init Nov 8 00:01:02.410221 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 8 00:01:02.410234 kernel: pnp: PnP ACPI: found 1 devices Nov 8 00:01:02.410242 kernel: NET: Registered PF_INET protocol family Nov 8 00:01:02.410250 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 8 00:01:02.410258 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 8 00:01:02.410266 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 8 00:01:02.410275 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 8 00:01:02.410285 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 8 00:01:02.410293 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 8 00:01:02.410300 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:01:02.410308 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 8 00:01:02.410316 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 8 
00:01:02.410323 kernel: PCI: CLS 0 bytes, default 64 Nov 8 00:01:02.410331 kernel: kvm [1]: HYP mode not available Nov 8 00:01:02.410340 kernel: Initialise system trusted keyrings Nov 8 00:01:02.410348 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 8 00:01:02.410356 kernel: Key type asymmetric registered Nov 8 00:01:02.410364 kernel: Asymmetric key parser 'x509' registered Nov 8 00:01:02.410371 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 8 00:01:02.410379 kernel: io scheduler mq-deadline registered Nov 8 00:01:02.410387 kernel: io scheduler kyber registered Nov 8 00:01:02.410396 kernel: io scheduler bfq registered Nov 8 00:01:02.410404 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 8 00:01:02.410412 kernel: ACPI: button: Power Button [PWRB] Nov 8 00:01:02.410421 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 8 00:01:02.410509 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 8 00:01:02.410520 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 8 00:01:02.410528 kernel: thunder_xcv, ver 1.0 Nov 8 00:01:02.410538 kernel: thunder_bgx, ver 1.0 Nov 8 00:01:02.410545 kernel: nicpf, ver 1.0 Nov 8 00:01:02.410553 kernel: nicvf, ver 1.0 Nov 8 00:01:02.410645 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 8 00:01:02.410723 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-08T00:01:01 UTC (1762560061) Nov 8 00:01:02.410733 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 8 00:01:02.410743 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 8 00:01:02.410751 kernel: watchdog: NMI not fully supported Nov 8 00:01:02.410758 kernel: watchdog: Hard watchdog permanently disabled Nov 8 00:01:02.410766 kernel: NET: Registered PF_INET6 protocol family Nov 8 00:01:02.410773 kernel: Segment Routing with IPv6 Nov 8 00:01:02.410781 kernel: In-situ OAM (IOAM) with IPv6 Nov 8 
00:01:02.410789 kernel: NET: Registered PF_PACKET protocol family Nov 8 00:01:02.410796 kernel: Key type dns_resolver registered Nov 8 00:01:02.410806 kernel: registered taskstats version 1 Nov 8 00:01:02.410815 kernel: Loading compiled-in X.509 certificates Nov 8 00:01:02.410823 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: ebe7e9737da4c34f192c530d79f3cb246d03fd74' Nov 8 00:01:02.410831 kernel: Demotion targets for Node 0: null Nov 8 00:01:02.410839 kernel: Key type .fscrypt registered Nov 8 00:01:02.410847 kernel: Key type fscrypt-provisioning registered Nov 8 00:01:02.410855 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 8 00:01:02.410864 kernel: ima: Allocated hash algorithm: sha1 Nov 8 00:01:02.410883 kernel: ima: No architecture policies found Nov 8 00:01:02.410891 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 8 00:01:02.410899 kernel: clk: Disabling unused clocks Nov 8 00:01:02.410907 kernel: PM: genpd: Disabling unused power domains Nov 8 00:01:02.410915 kernel: Freeing unused kernel memory: 13120K Nov 8 00:01:02.410922 kernel: Run /init as init process Nov 8 00:01:02.410932 kernel: with arguments: Nov 8 00:01:02.410940 kernel: /init Nov 8 00:01:02.410947 kernel: with environment: Nov 8 00:01:02.410954 kernel: HOME=/ Nov 8 00:01:02.410962 kernel: TERM=linux Nov 8 00:01:02.411095 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 8 00:01:02.411183 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 8 00:01:02.411196 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Nov 8 00:01:02.411204 kernel: GPT:16515071 != 27000831 Nov 8 00:01:02.411212 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 8 00:01:02.411219 kernel: GPT:16515071 != 27000831 Nov 8 00:01:02.411227 kernel: GPT: Use GNU Parted to correct GPT errors. 
Nov 8 00:01:02.411235 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 8 00:01:02.411244 kernel: SCSI subsystem initialized Nov 8 00:01:02.411252 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 8 00:01:02.411261 kernel: device-mapper: uevent: version 1.0.3 Nov 8 00:01:02.411268 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 8 00:01:02.411276 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 8 00:01:02.411284 kernel: raid6: neonx8 gen() 15780 MB/s Nov 8 00:01:02.411291 kernel: raid6: neonx4 gen() 15780 MB/s Nov 8 00:01:02.411301 kernel: raid6: neonx2 gen() 13218 MB/s Nov 8 00:01:02.411308 kernel: raid6: neonx1 gen() 10502 MB/s Nov 8 00:01:02.411316 kernel: raid6: int64x8 gen() 6892 MB/s Nov 8 00:01:02.411324 kernel: raid6: int64x4 gen() 7340 MB/s Nov 8 00:01:02.411332 kernel: raid6: int64x2 gen() 6099 MB/s Nov 8 00:01:02.411339 kernel: raid6: int64x1 gen() 5041 MB/s Nov 8 00:01:02.411347 kernel: raid6: using algorithm neonx8 gen() 15780 MB/s Nov 8 00:01:02.411356 kernel: raid6: .... 
xor() 12054 MB/s, rmw enabled Nov 8 00:01:02.411364 kernel: raid6: using neon recovery algorithm Nov 8 00:01:02.411371 kernel: xor: measuring software checksum speed Nov 8 00:01:02.411379 kernel: 8regs : 20629 MB/sec Nov 8 00:01:02.411386 kernel: 32regs : 21630 MB/sec Nov 8 00:01:02.411394 kernel: arm64_neon : 26356 MB/sec Nov 8 00:01:02.411402 kernel: xor: using function: arm64_neon (26356 MB/sec) Nov 8 00:01:02.411409 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 8 00:01:02.411419 kernel: BTRFS: device fsid 55631b0a-1ca9-4494-9c87-5a8b2623813a devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (204) Nov 8 00:01:02.411427 kernel: BTRFS info (device dm-0): first mount of filesystem 55631b0a-1ca9-4494-9c87-5a8b2623813a Nov 8 00:01:02.411435 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 8 00:01:02.411442 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 8 00:01:02.411450 kernel: BTRFS info (device dm-0): enabling free space tree Nov 8 00:01:02.411458 kernel: loop: module loaded Nov 8 00:01:02.411466 kernel: loop0: detected capacity change from 0 to 91464 Nov 8 00:01:02.411475 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 8 00:01:02.411485 systemd[1]: Successfully made /usr/ read-only. Nov 8 00:01:02.411497 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 8 00:01:02.411505 systemd[1]: Detected virtualization kvm. Nov 8 00:01:02.411513 systemd[1]: Detected architecture arm64. Nov 8 00:01:02.411523 systemd[1]: Running in initrd. Nov 8 00:01:02.411531 systemd[1]: No hostname configured, using default hostname. Nov 8 00:01:02.411540 systemd[1]: Hostname set to . 
Nov 8 00:01:02.411548 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 8 00:01:02.411556 systemd[1]: Queued start job for default target initrd.target. Nov 8 00:01:02.411564 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 8 00:01:02.411573 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:01:02.411582 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:01:02.411591 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 8 00:01:02.411599 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:01:02.411608 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 8 00:01:02.411617 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 8 00:01:02.411627 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:01:02.411635 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:01:02.411643 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 8 00:01:02.411651 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:01:02.411659 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:01:02.411668 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:01:02.411676 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:01:02.411685 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 8 00:01:02.411694 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 8 00:01:02.411702 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 8 00:01:02.411711 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. 
Nov 8 00:01:02.411729 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:01:02.411740 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:01:02.411749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:01:02.411758 systemd[1]: Reached target sockets.target - Socket Units. Nov 8 00:01:02.411769 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 8 00:01:02.411777 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 8 00:01:02.411786 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:01:02.411795 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 8 00:01:02.411805 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 8 00:01:02.411814 systemd[1]: Starting systemd-fsck-usr.service... Nov 8 00:01:02.411822 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:01:02.411831 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:01:02.411839 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:01:02.411849 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 8 00:01:02.411858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:01:02.411867 systemd[1]: Finished systemd-fsck-usr.service. Nov 8 00:01:02.411884 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 8 00:01:02.411915 systemd-journald[346]: Collecting audit messages is disabled. Nov 8 00:01:02.411938 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Nov 8 00:01:02.411946 kernel: Bridge firewalling registered Nov 8 00:01:02.411955 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:01:02.411966 systemd-journald[346]: Journal started Nov 8 00:01:02.411985 systemd-journald[346]: Runtime Journal (/run/log/journal/62ce70b3ba2a4d569e2a2e6a1cb72174) is 6M, max 48.5M, 42.4M free. Nov 8 00:01:02.412030 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:01:02.406029 systemd-modules-load[347]: Inserted module 'br_netfilter' Nov 8 00:01:02.417613 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:01:02.425313 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 8 00:01:02.428271 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:01:02.432921 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 8 00:01:02.436197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:01:02.439190 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:01:02.443338 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:01:02.447964 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:01:02.452015 systemd-tmpfiles[369]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 8 00:01:02.457108 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:01:02.461789 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:01:02.464377 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
Nov 8 00:01:02.467508 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 8 00:01:02.490016 systemd-resolved[376]: Positive Trust Anchors:
Nov 8 00:01:02.490034 systemd-resolved[376]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 8 00:01:02.490037 systemd-resolved[376]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Nov 8 00:01:02.490089 systemd-resolved[376]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 8 00:01:02.503816 dracut-cmdline[391]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8bfefa4d5bf8d825e537335d2d0fa0f6d70ecdd5bfc7a28e4bcd37bbf7abce90
Nov 8 00:01:02.514012 systemd-resolved[376]: Defaulting to hostname 'linux'.
Nov 8 00:01:02.515088 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 8 00:01:02.516736 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:01:02.575083 kernel: Loading iSCSI transport class v2.0-870.
Nov 8 00:01:02.584099 kernel: iscsi: registered transport (tcp)
Nov 8 00:01:02.597239 kernel: iscsi: registered transport (qla4xxx)
Nov 8 00:01:02.597286 kernel: QLogic iSCSI HBA Driver
Nov 8 00:01:02.617281 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 8 00:01:02.636119 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 8 00:01:02.638365 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 8 00:01:02.684345 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 8 00:01:02.686812 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 8 00:01:02.688439 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 8 00:01:02.729513 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 8 00:01:02.735229 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:01:02.771544 systemd-udevd[633]: Using default interface naming scheme 'v257'.
Nov 8 00:01:02.779329 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 8 00:01:02.782252 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 8 00:01:02.810200 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:01:02.812953 dracut-pre-trigger[700]: rd.md=0: removing MD RAID activation
Nov 8 00:01:02.813319 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 8 00:01:02.840149 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:01:02.842559 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 8 00:01:02.860645 systemd-networkd[744]: lo: Link UP
Nov 8 00:01:02.860653 systemd-networkd[744]: lo: Gained carrier
Nov 8 00:01:02.861600 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 8 00:01:02.864552 systemd[1]: Reached target network.target - Network.
Nov 8 00:01:02.900340 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:01:02.902861 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 8 00:01:02.958370 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 8 00:01:02.968458 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 8 00:01:02.975789 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 8 00:01:02.983652 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 8 00:01:02.986264 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 8 00:01:03.000968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 8 00:01:03.001259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:01:03.004772 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:01:03.008701 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 8 00:01:03.011488 disk-uuid[805]: Primary Header is updated.
Nov 8 00:01:03.011488 disk-uuid[805]: Secondary Entries is updated.
Nov 8 00:01:03.011488 disk-uuid[805]: Secondary Header is updated.
Nov 8 00:01:03.017188 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 8 00:01:03.017200 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 8 00:01:03.018678 systemd-networkd[744]: eth0: Link UP
Nov 8 00:01:03.018831 systemd-networkd[744]: eth0: Gained carrier
Nov 8 00:01:03.018840 systemd-networkd[744]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Nov 8 00:01:03.035142 systemd-networkd[744]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 8 00:01:03.045407 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 8 00:01:03.077831 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:01:03.079314 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:01:03.081526 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:01:03.084093 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 8 00:01:03.087308 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 8 00:01:03.123114 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:01:04.045984 disk-uuid[808]: Warning: The kernel is still using the old partition table.
Nov 8 00:01:04.045984 disk-uuid[808]: The new table will be used at the next reboot or after you
Nov 8 00:01:04.045984 disk-uuid[808]: run partprobe(8) or kpartx(8)
Nov 8 00:01:04.045984 disk-uuid[808]: The operation has completed successfully.
Nov 8 00:01:04.055230 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 8 00:01:04.056478 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 8 00:01:04.058997 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 8 00:01:04.091089 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838)
Nov 8 00:01:04.091139 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 8 00:01:04.093400 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:01:04.096701 kernel: BTRFS info (device vda6): turning on async discard
Nov 8 00:01:04.096741 kernel: BTRFS info (device vda6): enabling free space tree
Nov 8 00:01:04.103072 kernel: BTRFS info (device vda6): last unmount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 8 00:01:04.103910 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 8 00:01:04.106756 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 8 00:01:04.218553 ignition[857]: Ignition 2.22.0
Nov 8 00:01:04.218566 ignition[857]: Stage: fetch-offline
Nov 8 00:01:04.218607 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:01:04.218618 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:01:04.218709 ignition[857]: parsed url from cmdline: ""
Nov 8 00:01:04.218712 ignition[857]: no config URL provided
Nov 8 00:01:04.218718 ignition[857]: reading system config file "/usr/lib/ignition/user.ign"
Nov 8 00:01:04.218726 ignition[857]: no config at "/usr/lib/ignition/user.ign"
Nov 8 00:01:04.218775 ignition[857]: op(1): [started] loading QEMU firmware config module
Nov 8 00:01:04.218780 ignition[857]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 8 00:01:04.229998 ignition[857]: op(1): [finished] loading QEMU firmware config module
Nov 8 00:01:04.272822 ignition[857]: parsing config with SHA512: f4fdbc2e89f64569ecfdeeb6b572bbbf187def4127f8fa37d48dee4f6132dfddf6925ea7dffbddefdc12dbe23a5751bb4dfabd4d2f1ad09a061f20af96f7524e
Nov 8 00:01:04.277500 unknown[857]: fetched base config from "system"
Nov 8 00:01:04.277515 unknown[857]: fetched user config from "qemu"
Nov 8 00:01:04.277862 ignition[857]: fetch-offline: fetch-offline passed
Nov 8 00:01:04.277930 ignition[857]: Ignition finished successfully
Nov 8 00:01:04.281892 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:01:04.284365 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 8 00:01:04.285245 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 8 00:01:04.323944 ignition[871]: Ignition 2.22.0
Nov 8 00:01:04.323963 ignition[871]: Stage: kargs
Nov 8 00:01:04.324142 ignition[871]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:01:04.324151 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:01:04.327333 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 8 00:01:04.325099 ignition[871]: kargs: kargs passed
Nov 8 00:01:04.325150 ignition[871]: Ignition finished successfully
Nov 8 00:01:04.330463 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 8 00:01:04.372768 ignition[879]: Ignition 2.22.0
Nov 8 00:01:04.372787 ignition[879]: Stage: disks
Nov 8 00:01:04.372931 ignition[879]: no configs at "/usr/lib/ignition/base.d"
Nov 8 00:01:04.376080 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 8 00:01:04.372939 ignition[879]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:01:04.377322 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 8 00:01:04.373650 ignition[879]: disks: disks passed
Nov 8 00:01:04.379127 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 8 00:01:04.373694 ignition[879]: Ignition finished successfully
Nov 8 00:01:04.381203 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 8 00:01:04.383104 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 8 00:01:04.384792 systemd[1]: Reached target basic.target - Basic System.
Nov 8 00:01:04.387645 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 8 00:01:04.426502 systemd-fsck[889]: ROOT: clean, 15/456736 files, 38230/456704 blocks
Nov 8 00:01:04.430510 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 8 00:01:04.433456 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 8 00:01:04.509096 kernel: EXT4-fs (vda9): mounted filesystem 12d1c98d-1cd5-4af6-bfe4-c8600a1c2a61 r/w with ordered data mode. Quota mode: none.
Nov 8 00:01:04.509854 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 8 00:01:04.511337 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 8 00:01:04.514186 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:01:04.516280 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 8 00:01:04.517515 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 8 00:01:04.517563 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 8 00:01:04.517592 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:01:04.531886 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 8 00:01:04.534929 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 8 00:01:04.541191 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (897)
Nov 8 00:01:04.541214 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 8 00:01:04.541225 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:01:04.544165 kernel: BTRFS info (device vda6): turning on async discard
Nov 8 00:01:04.544250 kernel: BTRFS info (device vda6): enabling free space tree
Nov 8 00:01:04.545195 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:01:04.574654 initrd-setup-root[921]: cut: /sysroot/etc/passwd: No such file or directory
Nov 8 00:01:04.579723 initrd-setup-root[928]: cut: /sysroot/etc/group: No such file or directory
Nov 8 00:01:04.583192 initrd-setup-root[935]: cut: /sysroot/etc/shadow: No such file or directory
Nov 8 00:01:04.586512 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 8 00:01:04.588246 systemd-networkd[744]: eth0: Gained IPv6LL
Nov 8 00:01:04.662015 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 8 00:01:04.664702 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 8 00:01:04.666601 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 8 00:01:04.685822 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 8 00:01:04.687650 kernel: BTRFS info (device vda6): last unmount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 8 00:01:04.704205 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 8 00:01:04.719751 ignition[1011]: INFO : Ignition 2.22.0
Nov 8 00:01:04.719751 ignition[1011]: INFO : Stage: mount
Nov 8 00:01:04.721852 ignition[1011]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:01:04.721852 ignition[1011]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:01:04.721852 ignition[1011]: INFO : mount: mount passed
Nov 8 00:01:04.721852 ignition[1011]: INFO : Ignition finished successfully
Nov 8 00:01:04.723549 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 8 00:01:04.726336 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 8 00:01:05.511417 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 8 00:01:05.543070 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1023)
Nov 8 00:01:05.543116 kernel: BTRFS info (device vda6): first mount of filesystem c876c121-698c-4fc0-9477-04b409cf288e
Nov 8 00:01:05.544259 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 8 00:01:05.547076 kernel: BTRFS info (device vda6): turning on async discard
Nov 8 00:01:05.547101 kernel: BTRFS info (device vda6): enabling free space tree
Nov 8 00:01:05.548508 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 8 00:01:05.578728 ignition[1041]: INFO : Ignition 2.22.0
Nov 8 00:01:05.578728 ignition[1041]: INFO : Stage: files
Nov 8 00:01:05.580389 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:01:05.580389 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:01:05.580389 ignition[1041]: DEBUG : files: compiled without relabeling support, skipping
Nov 8 00:01:05.583545 ignition[1041]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 8 00:01:05.583545 ignition[1041]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 8 00:01:05.583545 ignition[1041]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 8 00:01:05.587762 ignition[1041]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 8 00:01:05.587762 ignition[1041]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 8 00:01:05.587762 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:01:05.587762 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 8 00:01:05.584031 unknown[1041]: wrote ssh authorized keys file for user: core
Nov 8 00:01:05.636792 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 8 00:01:05.772332 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 8 00:01:05.772332 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 8 00:01:05.776074 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Nov 8 00:01:05.966985 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 8 00:01:06.071606 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 8 00:01:06.071606 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:01:06.075751 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 8 00:01:06.384770 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 8 00:01:06.656298 ignition[1041]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 8 00:01:06.656298 ignition[1041]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 8 00:01:06.661184 ignition[1041]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:01:06.663266 ignition[1041]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 8 00:01:06.663266 ignition[1041]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 8 00:01:06.663266 ignition[1041]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 8 00:01:06.663266 ignition[1041]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:01:06.663266 ignition[1041]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 8 00:01:06.663266 ignition[1041]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 8 00:01:06.663266 ignition[1041]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:01:06.687464 ignition[1041]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:01:06.691439 ignition[1041]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 8 00:01:06.693046 ignition[1041]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 8 00:01:06.693046 ignition[1041]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 8 00:01:06.693046 ignition[1041]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 8 00:01:06.693046 ignition[1041]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:01:06.693046 ignition[1041]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 8 00:01:06.693046 ignition[1041]: INFO : files: files passed
Nov 8 00:01:06.693046 ignition[1041]: INFO : Ignition finished successfully
Nov 8 00:01:06.693800 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 8 00:01:06.697025 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 8 00:01:06.699477 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 8 00:01:06.714758 initrd-setup-root-after-ignition[1070]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 8 00:01:06.707364 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 8 00:01:06.717525 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:01:06.717525 initrd-setup-root-after-ignition[1073]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:01:06.707450 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 8 00:01:06.726333 initrd-setup-root-after-ignition[1077]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 8 00:01:06.717139 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:01:06.719282 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 8 00:01:06.722927 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 8 00:01:06.766136 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 8 00:01:06.766276 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 8 00:01:06.768771 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 8 00:01:06.770818 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 8 00:01:06.773090 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 8 00:01:06.774204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 8 00:01:06.808379 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:01:06.811109 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 8 00:01:06.830611 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Nov 8 00:01:06.830815 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 8 00:01:06.833161 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 8 00:01:06.835437 systemd[1]: Stopped target timers.target - Timer Units.
Nov 8 00:01:06.837302 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 8 00:01:06.837446 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 8 00:01:06.840198 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 8 00:01:06.842223 systemd[1]: Stopped target basic.target - Basic System.
Nov 8 00:01:06.844007 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 8 00:01:06.845983 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 8 00:01:06.848257 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 8 00:01:06.850397 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Nov 8 00:01:06.852479 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 8 00:01:06.854545 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 8 00:01:06.856557 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 8 00:01:06.858523 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 8 00:01:06.860372 systemd[1]: Stopped target swap.target - Swaps.
Nov 8 00:01:06.862017 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 8 00:01:06.862165 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 8 00:01:06.864751 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 8 00:01:06.866915 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 8 00:01:06.869119 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 8 00:01:06.869257 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 8 00:01:06.871444 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 8 00:01:06.871566 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 8 00:01:06.874450 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 8 00:01:06.874585 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 8 00:01:06.876657 systemd[1]: Stopped target paths.target - Path Units.
Nov 8 00:01:06.878183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 8 00:01:06.878299 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 8 00:01:06.880329 systemd[1]: Stopped target slices.target - Slice Units.
Nov 8 00:01:06.882165 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 8 00:01:06.883849 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 8 00:01:06.883950 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 8 00:01:06.885686 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 8 00:01:06.885769 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 8 00:01:06.887987 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 8 00:01:06.888115 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 8 00:01:06.890006 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 8 00:01:06.890126 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 8 00:01:06.892683 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 8 00:01:06.894179 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 8 00:01:06.894315 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 8 00:01:06.896930 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 8 00:01:06.898883 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 8 00:01:06.899022 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 8 00:01:06.901453 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 8 00:01:06.901561 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 8 00:01:06.903374 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 8 00:01:06.903482 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 8 00:01:06.911114 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 8 00:01:06.913099 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 8 00:01:06.918470 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 8 00:01:06.925842 ignition[1098]: INFO : Ignition 2.22.0
Nov 8 00:01:06.925842 ignition[1098]: INFO : Stage: umount
Nov 8 00:01:06.927473 ignition[1098]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 8 00:01:06.927473 ignition[1098]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 8 00:01:06.929567 ignition[1098]: INFO : umount: umount passed
Nov 8 00:01:06.929567 ignition[1098]: INFO : Ignition finished successfully
Nov 8 00:01:06.930591 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 8 00:01:06.930710 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 8 00:01:06.932255 systemd[1]: Stopped target network.target - Network.
Nov 8 00:01:06.933859 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 8 00:01:06.933938 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 8 00:01:06.935908 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 8 00:01:06.935969 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 8 00:01:06.937788 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 8 00:01:06.937842 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 8 00:01:06.939683 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 8 00:01:06.939733 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 8 00:01:06.941627 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 8 00:01:06.943405 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 8 00:01:06.952275 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 8 00:01:06.953316 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 8 00:01:06.957554 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 8 00:01:06.957657 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 8 00:01:06.961466 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Nov 8 00:01:06.962717 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 8 00:01:06.962760 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 8 00:01:06.965660 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 8 00:01:06.966782 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 8 00:01:06.966855 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 8 00:01:06.969194 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:01:06.969242 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:01:06.971213 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 8 00:01:06.971258 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 8 00:01:06.973247 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 8 00:01:06.977118 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 8 00:01:06.977207 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 8 00:01:06.978889 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 8 00:01:06.978978 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 8 00:01:06.987637 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 8 00:01:06.989088 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:01:06.990557 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 8 00:01:06.990594 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 8 00:01:06.992433 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 8 00:01:06.992464 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:01:06.994296 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 8 00:01:06.994350 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 8 00:01:06.997037 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 8 00:01:06.997118 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 8 00:01:06.999955 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 8 00:01:07.000007 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 8 00:01:07.003610 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 8 00:01:07.004907 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 8 00:01:07.004970 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:01:07.007227 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 8 00:01:07.007274 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:01:07.009606 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Nov 8 00:01:07.009657 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:01:07.012385 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 8 00:01:07.012464 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 8 00:01:07.016305 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 8 00:01:07.016394 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 8 00:01:07.018843 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 8 00:01:07.021323 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 8 00:01:07.041818 systemd[1]: Switching root. Nov 8 00:01:07.086595 systemd-journald[346]: Journal stopped Nov 8 00:01:07.973296 systemd-journald[346]: Received SIGTERM from PID 1 (systemd). Nov 8 00:01:07.973350 kernel: SELinux: policy capability network_peer_controls=1 Nov 8 00:01:07.973364 kernel: SELinux: policy capability open_perms=1 Nov 8 00:01:07.973379 kernel: SELinux: policy capability extended_socket_class=1 Nov 8 00:01:07.973395 kernel: SELinux: policy capability always_check_network=0 Nov 8 00:01:07.973407 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 8 00:01:07.973418 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 8 00:01:07.973428 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 8 00:01:07.973439 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 8 00:01:07.973449 kernel: SELinux: policy capability userspace_initial_context=0 Nov 8 00:01:07.973459 kernel: audit: type=1403 audit(1762560067.339:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 8 00:01:07.973475 systemd[1]: Successfully loaded SELinux policy in 60.490ms. Nov 8 00:01:07.973510 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.022ms. 
Nov 8 00:01:07.973522 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 8 00:01:07.973534 systemd[1]: Detected virtualization kvm. Nov 8 00:01:07.973545 systemd[1]: Detected architecture arm64. Nov 8 00:01:07.973555 systemd[1]: Detected first boot. Nov 8 00:01:07.973566 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 8 00:01:07.973578 zram_generator::config[1147]: No configuration found. Nov 8 00:01:07.973593 kernel: NET: Registered PF_VSOCK protocol family Nov 8 00:01:07.973604 systemd[1]: Populated /etc with preset unit settings. Nov 8 00:01:07.973615 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 8 00:01:07.973626 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 8 00:01:07.973637 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 8 00:01:07.973651 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 8 00:01:07.973662 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 8 00:01:07.973673 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 8 00:01:07.973684 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 8 00:01:07.973695 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 8 00:01:07.973706 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 8 00:01:07.973717 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 8 00:01:07.973729 systemd[1]: Created slice user.slice - User and Session Slice. 
Nov 8 00:01:07.973740 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 8 00:01:07.973751 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 8 00:01:07.973762 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 8 00:01:07.973772 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 8 00:01:07.973783 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 8 00:01:07.973794 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 8 00:01:07.973806 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 8 00:01:07.973817 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 8 00:01:07.973829 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 8 00:01:07.973839 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 8 00:01:07.973851 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 8 00:01:07.973868 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 8 00:01:07.973884 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 8 00:01:07.973895 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 8 00:01:07.973906 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 8 00:01:07.973917 systemd[1]: Reached target slices.target - Slice Units. Nov 8 00:01:07.973929 systemd[1]: Reached target swap.target - Swaps. Nov 8 00:01:07.973940 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 8 00:01:07.973951 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. 
Nov 8 00:01:07.973962 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 8 00:01:07.973973 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 8 00:01:07.973984 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 8 00:01:07.973996 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 8 00:01:07.974006 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 8 00:01:07.974017 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 8 00:01:07.974027 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 8 00:01:07.974037 systemd[1]: Mounting media.mount - External Media Directory... Nov 8 00:01:07.974050 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 8 00:01:07.974070 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 8 00:01:07.974082 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 8 00:01:07.974093 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 8 00:01:07.974104 systemd[1]: Reached target machines.target - Containers. Nov 8 00:01:07.974115 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 8 00:01:07.974126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:01:07.974139 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 8 00:01:07.974150 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 8 00:01:07.974161 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:01:07.974171 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Nov 8 00:01:07.974183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:01:07.974193 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 8 00:01:07.974205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:01:07.974216 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 8 00:01:07.974226 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 8 00:01:07.974237 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 8 00:01:07.974247 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 8 00:01:07.974258 systemd[1]: Stopped systemd-fsck-usr.service. Nov 8 00:01:07.974269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 8 00:01:07.974281 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 8 00:01:07.974292 kernel: ACPI: bus type drm_connector registered Nov 8 00:01:07.974303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 8 00:01:07.974314 kernel: fuse: init (API version 7.41) Nov 8 00:01:07.974324 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 8 00:01:07.974335 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 8 00:01:07.974346 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 8 00:01:07.974358 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 8 00:01:07.974369 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Nov 8 00:01:07.974379 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 8 00:01:07.974389 systemd[1]: Mounted media.mount - External Media Directory. Nov 8 00:01:07.974408 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 8 00:01:07.974419 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 8 00:01:07.974429 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 8 00:01:07.974458 systemd-journald[1222]: Collecting audit messages is disabled. Nov 8 00:01:07.974480 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 8 00:01:07.974492 systemd-journald[1222]: Journal started Nov 8 00:01:07.974514 systemd-journald[1222]: Runtime Journal (/run/log/journal/62ce70b3ba2a4d569e2a2e6a1cb72174) is 6M, max 48.5M, 42.4M free. Nov 8 00:01:07.719921 systemd[1]: Queued start job for default target multi-user.target. Nov 8 00:01:07.740383 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 8 00:01:07.740842 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 8 00:01:07.978683 systemd[1]: Started systemd-journald.service - Journal Service. Nov 8 00:01:07.979842 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 8 00:01:07.981717 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 8 00:01:07.981918 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 8 00:01:07.983563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:01:07.983753 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:01:07.985285 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:01:07.985463 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:01:07.986852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 8 00:01:07.987035 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:01:07.988594 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 8 00:01:07.988755 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 8 00:01:07.990231 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:01:07.990409 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:01:07.991970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 8 00:01:07.993709 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 8 00:01:07.996002 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 8 00:01:07.997834 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 8 00:01:08.011401 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 8 00:01:08.013396 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 8 00:01:08.015912 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 8 00:01:08.018160 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 8 00:01:08.019558 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 8 00:01:08.019586 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 8 00:01:08.021706 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 8 00:01:08.023371 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:01:08.025888 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Nov 8 00:01:08.028220 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 8 00:01:08.029635 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:01:08.030593 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 8 00:01:08.031922 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:01:08.032979 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 8 00:01:08.039017 systemd-journald[1222]: Time spent on flushing to /var/log/journal/62ce70b3ba2a4d569e2a2e6a1cb72174 is 11.724ms for 872 entries. Nov 8 00:01:08.039017 systemd-journald[1222]: System Journal (/var/log/journal/62ce70b3ba2a4d569e2a2e6a1cb72174) is 8M, max 163.5M, 155.5M free. Nov 8 00:01:08.064469 systemd-journald[1222]: Received client request to flush runtime journal. Nov 8 00:01:08.064523 kernel: loop1: detected capacity change from 0 to 207008 Nov 8 00:01:08.039228 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 8 00:01:08.047290 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 8 00:01:08.049292 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 8 00:01:08.051565 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 8 00:01:08.054812 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 8 00:01:08.056501 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 8 00:01:08.058167 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 8 00:01:08.062588 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Nov 8 00:01:08.066333 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 8 00:01:08.068259 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 8 00:01:08.091700 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 8 00:01:08.094222 kernel: loop2: detected capacity change from 0 to 119832 Nov 8 00:01:08.095910 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 8 00:01:08.098364 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 8 00:01:08.110359 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 8 00:01:08.112604 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 8 00:01:08.126088 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Nov 8 00:01:08.126110 systemd-tmpfiles[1279]: ACLs are not supported, ignoring. Nov 8 00:01:08.130401 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 8 00:01:08.131081 kernel: loop3: detected capacity change from 0 to 100624 Nov 8 00:01:08.147640 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 8 00:01:08.156068 kernel: loop4: detected capacity change from 0 to 207008 Nov 8 00:01:08.166078 kernel: loop5: detected capacity change from 0 to 119832 Nov 8 00:01:08.172071 kernel: loop6: detected capacity change from 0 to 100624 Nov 8 00:01:08.176431 (sd-merge)[1290]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 8 00:01:08.179294 (sd-merge)[1290]: Merged extensions into '/usr'. Nov 8 00:01:08.186216 systemd[1]: Reload requested from client PID 1263 ('systemd-sysext') (unit systemd-sysext.service)... Nov 8 00:01:08.186233 systemd[1]: Reloading... Nov 8 00:01:08.204757 systemd-resolved[1278]: Positive Trust Anchors: Nov 8 00:01:08.204828 systemd-resolved[1278]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 8 00:01:08.204837 systemd-resolved[1278]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 8 00:01:08.204876 systemd-resolved[1278]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 8 00:01:08.213431 systemd-resolved[1278]: Defaulting to hostname 'linux'. Nov 8 00:01:08.230086 zram_generator::config[1316]: No configuration found. Nov 8 00:01:08.376574 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 8 00:01:08.376675 systemd[1]: Reloading finished in 190 ms. Nov 8 00:01:08.405134 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 8 00:01:08.406906 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 8 00:01:08.410285 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 8 00:01:08.429477 systemd[1]: Starting ensure-sysext.service... Nov 8 00:01:08.431524 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 8 00:01:08.446654 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 8 00:01:08.446688 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 8 00:01:08.446953 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Nov 8 00:01:08.447188 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 8 00:01:08.447590 systemd[1]: Reload requested from client PID 1353 ('systemctl') (unit ensure-sysext.service)... Nov 8 00:01:08.447607 systemd[1]: Reloading... Nov 8 00:01:08.447888 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 8 00:01:08.448154 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Nov 8 00:01:08.448290 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Nov 8 00:01:08.451955 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:01:08.451967 systemd-tmpfiles[1354]: Skipping /boot Nov 8 00:01:08.458474 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Nov 8 00:01:08.458488 systemd-tmpfiles[1354]: Skipping /boot Nov 8 00:01:08.494095 zram_generator::config[1384]: No configuration found. Nov 8 00:01:08.631724 systemd[1]: Reloading finished in 183 ms. Nov 8 00:01:08.656818 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 8 00:01:08.674741 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 8 00:01:08.682807 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 8 00:01:08.685355 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 8 00:01:08.687757 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 8 00:01:08.695382 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 8 00:01:08.697990 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 8 00:01:08.701375 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
Nov 8 00:01:08.708562 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:01:08.709881 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:01:08.717050 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 8 00:01:08.719782 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 8 00:01:08.721230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:01:08.721392 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 8 00:01:08.723475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:01:08.727141 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:01:08.732158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 8 00:01:08.733527 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 8 00:01:08.735272 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:01:08.735440 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 8 00:01:08.739760 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 8 00:01:08.745668 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 8 00:01:08.745874 systemd-udevd[1424]: Using default interface naming scheme 'v257'. Nov 8 00:01:08.747641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 8 00:01:08.749856 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 8 00:01:08.749941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 8 00:01:08.751017 systemd[1]: Finished ensure-sysext.service. Nov 8 00:01:08.754570 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 8 00:01:08.756372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 8 00:01:08.756534 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 8 00:01:08.758179 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 8 00:01:08.763227 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 8 00:01:08.764731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 8 00:01:08.764940 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 8 00:01:08.766906 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 8 00:01:08.767165 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 8 00:01:08.773549 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 8 00:01:08.773673 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 8 00:01:08.775581 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Nov 8 00:01:08.780985 augenrules[1459]: No rules Nov 8 00:01:08.781792 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:01:08.782149 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 8 00:01:08.786701 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 8 00:01:08.797266 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 8 00:01:08.814712 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 8 00:01:08.817124 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 8 00:01:08.834202 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 8 00:01:08.846009 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 8 00:01:08.848834 systemd[1]: Reached target time-set.target - System Time Set. Nov 8 00:01:08.865290 systemd-networkd[1473]: lo: Link UP Nov 8 00:01:08.865299 systemd-networkd[1473]: lo: Gained carrier Nov 8 00:01:08.865973 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 8 00:01:08.872189 systemd[1]: Reached target network.target - Network. Nov 8 00:01:08.875096 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 8 00:01:08.878097 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 8 00:01:08.909100 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Nov 8 00:01:08.911616 systemd-networkd[1473]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 8 00:01:08.911629 systemd-networkd[1473]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 8 00:01:08.912642 systemd-networkd[1473]: eth0: Link UP Nov 8 00:01:08.912745 systemd-networkd[1473]: eth0: Gained carrier Nov 8 00:01:08.912801 systemd-networkd[1473]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 8 00:01:08.927143 systemd-networkd[1473]: eth0: DHCPv4 address 10.0.0.84/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 8 00:01:08.928409 systemd-timesyncd[1457]: Network configuration changed, trying to establish connection. Nov 8 00:01:08.930206 systemd-timesyncd[1457]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 8 00:01:08.930266 systemd-timesyncd[1457]: Initial clock synchronization to Sat 2025-11-08 00:01:08.724784 UTC. Nov 8 00:01:08.931742 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 8 00:01:08.936268 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 8 00:01:08.958085 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 8 00:01:09.012431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 8 00:01:09.066405 ldconfig[1422]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 8 00:01:09.067799 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 8 00:01:09.071904 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 8 00:01:09.074680 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Nov 8 00:01:09.108636 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 8 00:01:09.110083 systemd[1]: Reached target sysinit.target - System Initialization. Nov 8 00:01:09.111269 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 8 00:01:09.112519 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 8 00:01:09.113967 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 8 00:01:09.115212 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 8 00:01:09.116591 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 8 00:01:09.118010 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 8 00:01:09.118061 systemd[1]: Reached target paths.target - Path Units. Nov 8 00:01:09.119095 systemd[1]: Reached target timers.target - Timer Units. Nov 8 00:01:09.121271 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 8 00:01:09.123840 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 8 00:01:09.126770 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 8 00:01:09.128401 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 8 00:01:09.129860 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 8 00:01:09.134915 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 8 00:01:09.136421 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 8 00:01:09.138248 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 8 00:01:09.139475 systemd[1]: Reached target sockets.target - Socket Units. 
Nov 8 00:01:09.140508 systemd[1]: Reached target basic.target - Basic System. Nov 8 00:01:09.141554 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:01:09.141587 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 8 00:01:09.142548 systemd[1]: Starting containerd.service - containerd container runtime... Nov 8 00:01:09.144779 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 8 00:01:09.146763 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 8 00:01:09.149076 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 8 00:01:09.151126 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 8 00:01:09.152211 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 8 00:01:09.154206 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 8 00:01:09.155754 jq[1534]: false Nov 8 00:01:09.156597 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 8 00:01:09.160343 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 8 00:01:09.164466 extend-filesystems[1535]: Found /dev/vda6 Nov 8 00:01:09.167141 extend-filesystems[1535]: Found /dev/vda9 Nov 8 00:01:09.168367 extend-filesystems[1535]: Checking size of /dev/vda9 Nov 8 00:01:09.172205 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 8 00:01:09.175417 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 8 00:01:09.176459 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Nov 8 00:01:09.176885 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 8 00:01:09.177191 extend-filesystems[1535]: Resized partition /dev/vda9 Nov 8 00:01:09.180197 systemd[1]: Starting update-engine.service - Update Engine... Nov 8 00:01:09.182008 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 8 00:01:09.182400 extend-filesystems[1556]: resize2fs 1.47.3 (8-Jul-2025) Nov 8 00:01:09.193093 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 8 00:01:09.195349 jq[1558]: true Nov 8 00:01:09.196068 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 8 00:01:09.197743 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 8 00:01:09.197989 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 8 00:01:09.198262 systemd[1]: motdgen.service: Deactivated successfully. Nov 8 00:01:09.198430 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 8 00:01:09.201954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 8 00:01:09.202369 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 8 00:01:09.216637 (ntainerd)[1567]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 8 00:01:09.223500 jq[1566]: true Nov 8 00:01:09.228026 update_engine[1555]: I20251108 00:01:09.226748 1555 main.cc:92] Flatcar Update Engine starting Nov 8 00:01:09.234835 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 8 00:01:09.234898 tar[1565]: linux-arm64/LICENSE Nov 8 00:01:09.247360 dbus-daemon[1532]: [system] SELinux support is enabled Nov 8 00:01:09.247753 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Nov 8 00:01:09.249631 tar[1565]: linux-arm64/helm Nov 8 00:01:09.249932 extend-filesystems[1556]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 8 00:01:09.249932 extend-filesystems[1556]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 8 00:01:09.249932 extend-filesystems[1556]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 8 00:01:09.257146 extend-filesystems[1535]: Resized filesystem in /dev/vda9 Nov 8 00:01:09.258179 update_engine[1555]: I20251108 00:01:09.252235 1555 update_check_scheduler.cc:74] Next update check in 6m27s Nov 8 00:01:09.251895 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 8 00:01:09.254327 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 8 00:01:09.259086 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 8 00:01:09.259141 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 8 00:01:09.261283 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 8 00:01:09.261314 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 8 00:01:09.263965 systemd[1]: Started update-engine.service - Update Engine. Nov 8 00:01:09.267499 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 8 00:01:09.270679 bash[1598]: Updated "/home/core/.ssh/authorized_keys" Nov 8 00:01:09.279033 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 8 00:01:09.281241 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (Power Button) Nov 8 00:01:09.282727 systemd-logind[1554]: New seat seat0. 
Nov 8 00:01:09.286282 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 8 00:01:09.289818 systemd[1]: Started systemd-logind.service - User Login Management. Nov 8 00:01:09.324718 locksmithd[1600]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 8 00:01:09.374586 containerd[1567]: time="2025-11-08T00:01:09Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 8 00:01:09.376558 containerd[1567]: time="2025-11-08T00:01:09.376524082Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386356561Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.12µs" Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386388753Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386404888Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386537591Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386552557Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386575123Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386625827Z" level=info msg="skip loading plugin" error="no scratch file 
generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386636428Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386829345Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386845051Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386855847Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387627 containerd[1567]: time="2025-11-08T00:01:09.386863408Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387913 containerd[1567]: time="2025-11-08T00:01:09.386935859Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387913 containerd[1567]: time="2025-11-08T00:01:09.387160267Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387913 containerd[1567]: time="2025-11-08T00:01:09.387190120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 8 00:01:09.387913 containerd[1567]: time="2025-11-08T00:01:09.387200370Z" level=info msg="loading plugin" 
id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 8 00:01:09.387913 containerd[1567]: time="2025-11-08T00:01:09.387238564Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 8 00:01:09.387913 containerd[1567]: time="2025-11-08T00:01:09.387440718Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 8 00:01:09.387913 containerd[1567]: time="2025-11-08T00:01:09.387502452Z" level=info msg="metadata content store policy set" policy=shared Nov 8 00:01:09.391731 containerd[1567]: time="2025-11-08T00:01:09.391704662Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 8 00:01:09.391841 containerd[1567]: time="2025-11-08T00:01:09.391828091Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 8 00:01:09.391957 containerd[1567]: time="2025-11-08T00:01:09.391942243Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 8 00:01:09.392025 containerd[1567]: time="2025-11-08T00:01:09.392010057Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 8 00:01:09.392101 containerd[1567]: time="2025-11-08T00:01:09.392087808Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 8 00:01:09.392153 containerd[1567]: time="2025-11-08T00:01:09.392140617Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 8 00:01:09.392204 containerd[1567]: time="2025-11-08T00:01:09.392191672Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 8 00:01:09.392252 containerd[1567]: time="2025-11-08T00:01:09.392240311Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service 
type=io.containerd.service.v1 Nov 8 00:01:09.392314 containerd[1567]: time="2025-11-08T00:01:09.392300290Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 8 00:01:09.392366 containerd[1567]: time="2025-11-08T00:01:09.392353917Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 8 00:01:09.392421 containerd[1567]: time="2025-11-08T00:01:09.392408051Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 8 00:01:09.392472 containerd[1567]: time="2025-11-08T00:01:09.392459496Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 8 00:01:09.392677 containerd[1567]: time="2025-11-08T00:01:09.392653193Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 8 00:01:09.392752 containerd[1567]: time="2025-11-08T00:01:09.392737219Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 8 00:01:09.392818 containerd[1567]: time="2025-11-08T00:01:09.392804409Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 8 00:01:09.392872 containerd[1567]: time="2025-11-08T00:01:09.392860258Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 8 00:01:09.392922 containerd[1567]: time="2025-11-08T00:01:09.392909715Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 8 00:01:09.392971 containerd[1567]: time="2025-11-08T00:01:09.392958626Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 8 00:01:09.393039 containerd[1567]: time="2025-11-08T00:01:09.393017203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 8 00:01:09.393144 containerd[1567]: 
time="2025-11-08T00:01:09.393125003Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 8 00:01:09.393203 containerd[1567]: time="2025-11-08T00:01:09.393190205Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 8 00:01:09.393262 containerd[1567]: time="2025-11-08T00:01:09.393248509Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 8 00:01:09.393311 containerd[1567]: time="2025-11-08T00:01:09.393298668Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 8 00:01:09.393530 containerd[1567]: time="2025-11-08T00:01:09.393514306Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 8 00:01:09.393589 containerd[1567]: time="2025-11-08T00:01:09.393578495Z" level=info msg="Start snapshots syncer" Nov 8 00:01:09.393676 containerd[1567]: time="2025-11-08T00:01:09.393662210Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 8 00:01:09.394421 containerd[1567]: time="2025-11-08T00:01:09.394373939Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 8 00:01:09.394609 containerd[1567]: time="2025-11-08T00:01:09.394590435Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 8 00:01:09.394717 containerd[1567]: time="2025-11-08T00:01:09.394703497Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 8 00:01:09.394982 containerd[1567]: time="2025-11-08T00:01:09.394961694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 8 00:01:09.395117 containerd[1567]: time="2025-11-08T00:01:09.395099036Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 8 00:01:09.395175 containerd[1567]: time="2025-11-08T00:01:09.395160419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 8 00:01:09.395228 containerd[1567]: time="2025-11-08T00:01:09.395215722Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 8 00:01:09.395276 containerd[1567]: time="2025-11-08T00:01:09.395265413Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 8 00:01:09.395325 containerd[1567]: time="2025-11-08T00:01:09.395313389Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 8 00:01:09.395383 containerd[1567]: time="2025-11-08T00:01:09.395361209Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 8 00:01:09.395467 containerd[1567]: time="2025-11-08T00:01:09.395442234Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 8 00:01:09.395542 containerd[1567]: time="2025-11-08T00:01:09.395527157Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 8 00:01:09.395593 containerd[1567]: time="2025-11-08T00:01:09.395580745Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 8 00:01:09.395677 containerd[1567]: time="2025-11-08T00:01:09.395663290Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 8 00:01:09.395802 containerd[1567]: time="2025-11-08T00:01:09.395784224Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 8 00:01:09.395854 containerd[1567]: time="2025-11-08T00:01:09.395841554Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 8 00:01:09.395901 containerd[1567]: time="2025-11-08T00:01:09.395888829Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 8 00:01:09.395955 containerd[1567]: time="2025-11-08T00:01:09.395942105Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 8 00:01:09.396003 containerd[1567]: time="2025-11-08T00:01:09.395991991Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 8 00:01:09.396074 containerd[1567]: time="2025-11-08T00:01:09.396041175Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 8 00:01:09.396189 containerd[1567]: time="2025-11-08T00:01:09.396177737Z" level=info msg="runtime interface created" Nov 8 00:01:09.396230 containerd[1567]: time="2025-11-08T00:01:09.396219750Z" level=info msg="created NRI interface" Nov 8 00:01:09.396298 containerd[1567]: time="2025-11-08T00:01:09.396284602Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 8 00:01:09.396347 containerd[1567]: time="2025-11-08T00:01:09.396335384Z" level=info msg="Connect containerd service" Nov 8 00:01:09.396430 containerd[1567]: time="2025-11-08T00:01:09.396407134Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 8 00:01:09.397265 containerd[1567]: 
time="2025-11-08T00:01:09.397236445Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:01:09.468225 containerd[1567]: time="2025-11-08T00:01:09.468170259Z" level=info msg="Start subscribing containerd event" Nov 8 00:01:09.468753 containerd[1567]: time="2025-11-08T00:01:09.468731513Z" level=info msg="Start recovering state" Nov 8 00:01:09.468903 containerd[1567]: time="2025-11-08T00:01:09.468882924Z" level=info msg="Start event monitor" Nov 8 00:01:09.468963 containerd[1567]: time="2025-11-08T00:01:09.468949763Z" level=info msg="Start cni network conf syncer for default" Nov 8 00:01:09.469106 containerd[1567]: time="2025-11-08T00:01:09.469091080Z" level=info msg="Start streaming server" Nov 8 00:01:09.469163 containerd[1567]: time="2025-11-08T00:01:09.469151605Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 8 00:01:09.469361 containerd[1567]: time="2025-11-08T00:01:09.469345380Z" level=info msg="runtime interface starting up..." Nov 8 00:01:09.469417 containerd[1567]: time="2025-11-08T00:01:09.469406997Z" level=info msg="starting plugins..." Nov 8 00:01:09.469475 containerd[1567]: time="2025-11-08T00:01:09.469462573Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 8 00:01:09.470005 containerd[1567]: time="2025-11-08T00:01:09.468689461Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 8 00:01:09.470172 containerd[1567]: time="2025-11-08T00:01:09.470149437Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 8 00:01:09.470417 containerd[1567]: time="2025-11-08T00:01:09.470393409Z" level=info msg="containerd successfully booted in 0.096153s" Nov 8 00:01:09.470573 systemd[1]: Started containerd.service - containerd container runtime. 
Nov 8 00:01:09.541904 tar[1565]: linux-arm64/README.md Nov 8 00:01:09.562106 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 8 00:01:10.091192 systemd-networkd[1473]: eth0: Gained IPv6LL Nov 8 00:01:10.093363 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 8 00:01:10.095547 systemd[1]: Reached target network-online.target - Network is Online. Nov 8 00:01:10.098550 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 8 00:01:10.101067 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:01:10.105246 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 8 00:01:10.132727 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 8 00:01:10.132925 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 8 00:01:10.135855 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 8 00:01:10.137992 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 8 00:01:10.155396 sshd_keygen[1562]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 8 00:01:10.176198 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 8 00:01:10.179455 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 8 00:01:10.197397 systemd[1]: issuegen.service: Deactivated successfully. Nov 8 00:01:10.197599 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 8 00:01:10.200469 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 8 00:01:10.220918 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 8 00:01:10.224664 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 8 00:01:10.227663 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 8 00:01:10.229501 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 8 00:01:10.697087 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:01:10.698709 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 8 00:01:10.700001 systemd[1]: Startup finished in 1.225s (kernel) + 5.208s (initrd) + 3.421s (userspace) = 9.855s. Nov 8 00:01:10.701856 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:01:11.065439 kubelet[1671]: E1108 00:01:11.065377 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:01:11.067584 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:01:11.067726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:01:11.068111 systemd[1]: kubelet.service: Consumed 762ms CPU time, 258M memory peak. Nov 8 00:01:13.096395 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 8 00:01:13.097496 systemd[1]: Started sshd@0-10.0.0.84:22-10.0.0.1:53580.service - OpenSSH per-connection server daemon (10.0.0.1:53580). Nov 8 00:01:13.211585 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 53580 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:01:13.213324 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:01:13.223695 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 8 00:01:13.225151 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 8 00:01:13.227909 systemd-logind[1554]: New session 1 of user core. 
Nov 8 00:01:13.247698 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 8 00:01:13.250244 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 8 00:01:13.270394 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 8 00:01:13.273467 systemd-logind[1554]: New session c1 of user core. Nov 8 00:01:13.370717 systemd[1690]: Queued start job for default target default.target. Nov 8 00:01:13.379951 systemd[1690]: Created slice app.slice - User Application Slice. Nov 8 00:01:13.379979 systemd[1690]: Reached target paths.target - Paths. Nov 8 00:01:13.380016 systemd[1690]: Reached target timers.target - Timers. Nov 8 00:01:13.381207 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 8 00:01:13.390642 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 8 00:01:13.390709 systemd[1690]: Reached target sockets.target - Sockets. Nov 8 00:01:13.390747 systemd[1690]: Reached target basic.target - Basic System. Nov 8 00:01:13.390774 systemd[1690]: Reached target default.target - Main User Target. Nov 8 00:01:13.390799 systemd[1690]: Startup finished in 111ms. Nov 8 00:01:13.391064 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 8 00:01:13.392489 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 8 00:01:13.458957 systemd[1]: Started sshd@1-10.0.0.84:22-10.0.0.1:53586.service - OpenSSH per-connection server daemon (10.0.0.1:53586). Nov 8 00:01:13.515029 sshd[1701]: Accepted publickey for core from 10.0.0.1 port 53586 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:01:13.516373 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:01:13.520278 systemd-logind[1554]: New session 2 of user core. Nov 8 00:01:13.531234 systemd[1]: Started session-2.scope - Session 2 of User core. 
Nov 8 00:01:13.581539 sshd[1704]: Connection closed by 10.0.0.1 port 53586 Nov 8 00:01:13.581989 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Nov 8 00:01:13.592947 systemd[1]: sshd@1-10.0.0.84:22-10.0.0.1:53586.service: Deactivated successfully. Nov 8 00:01:13.594625 systemd[1]: session-2.scope: Deactivated successfully. Nov 8 00:01:13.597295 systemd-logind[1554]: Session 2 logged out. Waiting for processes to exit. Nov 8 00:01:13.598769 systemd[1]: Started sshd@2-10.0.0.84:22-10.0.0.1:53592.service - OpenSSH per-connection server daemon (10.0.0.1:53592). Nov 8 00:01:13.599825 systemd-logind[1554]: Removed session 2. Nov 8 00:01:13.655418 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 53592 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:01:13.656616 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:01:13.660901 systemd-logind[1554]: New session 3 of user core. Nov 8 00:01:13.671287 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 8 00:01:13.718608 sshd[1713]: Connection closed by 10.0.0.1 port 53592 Nov 8 00:01:13.718933 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Nov 8 00:01:13.728975 systemd[1]: sshd@2-10.0.0.84:22-10.0.0.1:53592.service: Deactivated successfully. Nov 8 00:01:13.730556 systemd[1]: session-3.scope: Deactivated successfully. Nov 8 00:01:13.731278 systemd-logind[1554]: Session 3 logged out. Waiting for processes to exit. Nov 8 00:01:13.733856 systemd[1]: Started sshd@3-10.0.0.84:22-10.0.0.1:53602.service - OpenSSH per-connection server daemon (10.0.0.1:53602). Nov 8 00:01:13.734601 systemd-logind[1554]: Removed session 3. 
Nov 8 00:01:13.789905 sshd[1719]: Accepted publickey for core from 10.0.0.1 port 53602 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:01:13.791199 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:01:13.796163 systemd-logind[1554]: New session 4 of user core. Nov 8 00:01:13.807243 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 8 00:01:13.859402 sshd[1722]: Connection closed by 10.0.0.1 port 53602 Nov 8 00:01:13.859742 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Nov 8 00:01:13.879226 systemd[1]: sshd@3-10.0.0.84:22-10.0.0.1:53602.service: Deactivated successfully. Nov 8 00:01:13.881554 systemd[1]: session-4.scope: Deactivated successfully. Nov 8 00:01:13.882956 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit. Nov 8 00:01:13.885314 systemd[1]: Started sshd@4-10.0.0.84:22-10.0.0.1:53616.service - OpenSSH per-connection server daemon (10.0.0.1:53616). Nov 8 00:01:13.886360 systemd-logind[1554]: Removed session 4. Nov 8 00:01:13.940017 sshd[1728]: Accepted publickey for core from 10.0.0.1 port 53616 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:01:13.941625 sshd-session[1728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:01:13.945713 systemd-logind[1554]: New session 5 of user core. Nov 8 00:01:13.956269 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 8 00:01:14.013819 sudo[1732]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 8 00:01:14.014134 sudo[1732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:01:14.035926 sudo[1732]: pam_unix(sudo:session): session closed for user root Nov 8 00:01:14.037671 sshd[1731]: Connection closed by 10.0.0.1 port 53616 Nov 8 00:01:14.038254 sshd-session[1728]: pam_unix(sshd:session): session closed for user core Nov 8 00:01:14.047336 systemd[1]: sshd@4-10.0.0.84:22-10.0.0.1:53616.service: Deactivated successfully. Nov 8 00:01:14.048926 systemd[1]: session-5.scope: Deactivated successfully. Nov 8 00:01:14.050690 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit. Nov 8 00:01:14.052769 systemd[1]: Started sshd@5-10.0.0.84:22-10.0.0.1:53624.service - OpenSSH per-connection server daemon (10.0.0.1:53624). Nov 8 00:01:14.053674 systemd-logind[1554]: Removed session 5. Nov 8 00:01:14.111901 sshd[1738]: Accepted publickey for core from 10.0.0.1 port 53624 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:01:14.113263 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:01:14.117019 systemd-logind[1554]: New session 6 of user core. Nov 8 00:01:14.125203 systemd[1]: Started session-6.scope - Session 6 of User core. 
Nov 8 00:01:14.177082 sudo[1744]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 8 00:01:14.177355 sudo[1744]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:01:14.183450 sudo[1744]: pam_unix(sudo:session): session closed for user root Nov 8 00:01:14.189639 sudo[1743]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 8 00:01:14.189895 sudo[1743]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:01:14.200304 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 8 00:01:14.245262 augenrules[1766]: No rules Nov 8 00:01:14.246363 systemd[1]: audit-rules.service: Deactivated successfully. Nov 8 00:01:14.247134 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 8 00:01:14.248393 sudo[1743]: pam_unix(sudo:session): session closed for user root Nov 8 00:01:14.250142 sshd[1742]: Connection closed by 10.0.0.1 port 53624 Nov 8 00:01:14.250361 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Nov 8 00:01:14.262193 systemd[1]: sshd@5-10.0.0.84:22-10.0.0.1:53624.service: Deactivated successfully. Nov 8 00:01:14.265576 systemd[1]: session-6.scope: Deactivated successfully. Nov 8 00:01:14.266327 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit. Nov 8 00:01:14.268308 systemd[1]: Started sshd@6-10.0.0.84:22-10.0.0.1:53630.service - OpenSSH per-connection server daemon (10.0.0.1:53630). Nov 8 00:01:14.269596 systemd-logind[1554]: Removed session 6. Nov 8 00:01:14.329663 sshd[1775]: Accepted publickey for core from 10.0.0.1 port 53630 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:01:14.330996 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:01:14.336107 systemd-logind[1554]: New session 7 of user core. 
Nov 8 00:01:14.343198 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 8 00:01:14.395094 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 8 00:01:14.395339 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 8 00:01:14.675794 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 8 00:01:14.700387 (dockerd)[1801]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 8 00:01:14.901917 dockerd[1801]: time="2025-11-08T00:01:14.901856397Z" level=info msg="Starting up" Nov 8 00:01:14.902596 dockerd[1801]: time="2025-11-08T00:01:14.902567667Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 8 00:01:14.912542 dockerd[1801]: time="2025-11-08T00:01:14.912507889Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 8 00:01:15.099490 dockerd[1801]: time="2025-11-08T00:01:15.099405299Z" level=info msg="Loading containers: start." Nov 8 00:01:15.108072 kernel: Initializing XFRM netlink socket Nov 8 00:01:15.315309 systemd-networkd[1473]: docker0: Link UP Nov 8 00:01:15.318453 dockerd[1801]: time="2025-11-08T00:01:15.318402479Z" level=info msg="Loading containers: done." Nov 8 00:01:15.329920 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3742730454-merged.mount: Deactivated successfully. 
Nov 8 00:01:15.332074 dockerd[1801]: time="2025-11-08T00:01:15.332018630Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 8 00:01:15.332155 dockerd[1801]: time="2025-11-08T00:01:15.332132542Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 8 00:01:15.332236 dockerd[1801]: time="2025-11-08T00:01:15.332215693Z" level=info msg="Initializing buildkit" Nov 8 00:01:15.354197 dockerd[1801]: time="2025-11-08T00:01:15.353981345Z" level=info msg="Completed buildkit initialization" Nov 8 00:01:15.359137 dockerd[1801]: time="2025-11-08T00:01:15.359084590Z" level=info msg="Daemon has completed initialization" Nov 8 00:01:15.359319 dockerd[1801]: time="2025-11-08T00:01:15.359190594Z" level=info msg="API listen on /run/docker.sock" Nov 8 00:01:15.359378 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 8 00:01:15.950936 containerd[1567]: time="2025-11-08T00:01:15.950896407Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 8 00:01:16.497105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3894920599.mount: Deactivated successfully. 
Nov 8 00:01:17.869578 containerd[1567]: time="2025-11-08T00:01:17.869532669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:17.870922 containerd[1567]: time="2025-11-08T00:01:17.870880753Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Nov 8 00:01:17.871891 containerd[1567]: time="2025-11-08T00:01:17.871870823Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:17.875775 containerd[1567]: time="2025-11-08T00:01:17.875731627Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:17.876783 containerd[1567]: time="2025-11-08T00:01:17.876627614Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.925682849s" Nov 8 00:01:17.876783 containerd[1567]: time="2025-11-08T00:01:17.876669441Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 8 00:01:17.877232 containerd[1567]: time="2025-11-08T00:01:17.877207256Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 8 00:01:19.131443 containerd[1567]: time="2025-11-08T00:01:19.131387913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:19.131971 containerd[1567]: time="2025-11-08T00:01:19.131939920Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Nov 8 00:01:19.132987 containerd[1567]: time="2025-11-08T00:01:19.132941868Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:19.136078 containerd[1567]: time="2025-11-08T00:01:19.135570357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:19.136609 containerd[1567]: time="2025-11-08T00:01:19.136416326Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.259178367s" Nov 8 00:01:19.136609 containerd[1567]: time="2025-11-08T00:01:19.136443501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 8 00:01:19.137126 containerd[1567]: time="2025-11-08T00:01:19.137104527Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 8 00:01:20.363192 containerd[1567]: time="2025-11-08T00:01:20.362437813Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:20.363555 containerd[1567]: time="2025-11-08T00:01:20.363304700Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Nov 8 00:01:20.364555 containerd[1567]: time="2025-11-08T00:01:20.364526200Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:20.369375 containerd[1567]: time="2025-11-08T00:01:20.369332153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:20.371157 containerd[1567]: time="2025-11-08T00:01:20.371114678Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.233974703s" Nov 8 00:01:20.371193 containerd[1567]: time="2025-11-08T00:01:20.371163349Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 8 00:01:20.371674 containerd[1567]: time="2025-11-08T00:01:20.371642063Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 8 00:01:21.130389 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 8 00:01:21.131671 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:01:21.285213 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 8 00:01:21.289316 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 8 00:01:21.340072 kubelet[2100]: E1108 00:01:21.339959 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 8 00:01:21.343106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 8 00:01:21.343259 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 8 00:01:21.343615 systemd[1]: kubelet.service: Consumed 149ms CPU time, 106.6M memory peak. Nov 8 00:01:21.445388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount737895002.mount: Deactivated successfully. Nov 8 00:01:21.668661 containerd[1567]: time="2025-11-08T00:01:21.668604100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:21.669813 containerd[1567]: time="2025-11-08T00:01:21.669785162Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Nov 8 00:01:21.671027 containerd[1567]: time="2025-11-08T00:01:21.670690497Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:21.672555 containerd[1567]: time="2025-11-08T00:01:21.672526236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:21.673030 containerd[1567]: time="2025-11-08T00:01:21.672992811Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.301319644s" Nov 8 00:01:21.673096 containerd[1567]: time="2025-11-08T00:01:21.673030336Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 8 00:01:21.673541 containerd[1567]: time="2025-11-08T00:01:21.673512272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 8 00:01:22.250178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3747309173.mount: Deactivated successfully. Nov 8 00:01:23.174095 containerd[1567]: time="2025-11-08T00:01:23.174021575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:23.175094 containerd[1567]: time="2025-11-08T00:01:23.174771594Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Nov 8 00:01:23.175951 containerd[1567]: time="2025-11-08T00:01:23.175913098Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:23.178755 containerd[1567]: time="2025-11-08T00:01:23.178721815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:23.180079 containerd[1567]: time="2025-11-08T00:01:23.179890212Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.506345734s" Nov 8 00:01:23.180079 containerd[1567]: time="2025-11-08T00:01:23.179968301Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 8 00:01:23.180611 containerd[1567]: time="2025-11-08T00:01:23.180533017Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 8 00:01:23.591951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978691128.mount: Deactivated successfully. Nov 8 00:01:23.598390 containerd[1567]: time="2025-11-08T00:01:23.598318501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:01:23.599716 containerd[1567]: time="2025-11-08T00:01:23.599668615Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 8 00:01:23.600620 containerd[1567]: time="2025-11-08T00:01:23.600563977Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:01:23.603623 containerd[1567]: time="2025-11-08T00:01:23.603560668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 8 00:01:23.604904 containerd[1567]: time="2025-11-08T00:01:23.604297220Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 423.737111ms" Nov 8 00:01:23.604904 containerd[1567]: time="2025-11-08T00:01:23.604324751Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 8 00:01:23.604904 containerd[1567]: time="2025-11-08T00:01:23.604901260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 8 00:01:24.122083 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2323355380.mount: Deactivated successfully. Nov 8 00:01:26.380100 containerd[1567]: time="2025-11-08T00:01:26.379976465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:26.380552 containerd[1567]: time="2025-11-08T00:01:26.380528039Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Nov 8 00:01:26.381641 containerd[1567]: time="2025-11-08T00:01:26.381614709Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:26.384478 containerd[1567]: time="2025-11-08T00:01:26.384272323Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 8 00:01:26.386446 containerd[1567]: time="2025-11-08T00:01:26.386410119Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag 
\"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.781481395s" Nov 8 00:01:26.386557 containerd[1567]: time="2025-11-08T00:01:26.386540173Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 8 00:01:31.027565 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:01:31.027762 systemd[1]: kubelet.service: Consumed 149ms CPU time, 106.6M memory peak. Nov 8 00:01:31.029943 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:01:31.057922 systemd[1]: Reload requested from client PID 2252 ('systemctl') (unit session-7.scope)... Nov 8 00:01:31.057942 systemd[1]: Reloading... Nov 8 00:01:31.136085 zram_generator::config[2296]: No configuration found. Nov 8 00:01:31.310331 systemd[1]: Reloading finished in 252 ms. Nov 8 00:01:31.374729 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 8 00:01:31.375115 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 8 00:01:31.375417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:01:31.375461 systemd[1]: kubelet.service: Consumed 98ms CPU time, 95.1M memory peak. Nov 8 00:01:31.377007 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:01:31.536178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:01:31.540964 (kubelet)[2340]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:01:31.599638 kubelet[2340]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:01:31.599638 kubelet[2340]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:01:31.599638 kubelet[2340]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:01:31.599638 kubelet[2340]: I1108 00:01:31.599456 2340 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:01:32.273082 kubelet[2340]: I1108 00:01:32.272796 2340 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:01:32.273082 kubelet[2340]: I1108 00:01:32.272829 2340 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:01:32.273318 kubelet[2340]: I1108 00:01:32.273303 2340 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:01:32.300461 kubelet[2340]: E1108 00:01:32.300409 2340 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.84:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:01:32.301963 kubelet[2340]: I1108 00:01:32.301899 2340 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:01:32.307929 kubelet[2340]: I1108 00:01:32.307873 2340 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 8 00:01:32.311312 kubelet[2340]: I1108 00:01:32.311279 2340 
server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 8 00:01:32.311925 kubelet[2340]: I1108 00:01:32.311862 2340 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:01:32.312097 kubelet[2340]: I1108 00:01:32.311908 2340 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:01:32.312189 kubelet[2340]: I1108 
00:01:32.312166 2340 topology_manager.go:138] "Creating topology manager with none policy" Nov 8 00:01:32.312189 kubelet[2340]: I1108 00:01:32.312176 2340 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:01:32.312367 kubelet[2340]: I1108 00:01:32.312351 2340 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:01:32.316238 kubelet[2340]: I1108 00:01:32.316199 2340 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:01:32.316238 kubelet[2340]: I1108 00:01:32.316229 2340 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:01:32.316308 kubelet[2340]: I1108 00:01:32.316254 2340 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:01:32.316308 kubelet[2340]: I1108 00:01:32.316268 2340 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:01:32.319245 kubelet[2340]: I1108 00:01:32.318848 2340 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 8 00:01:32.319245 kubelet[2340]: W1108 00:01:32.319139 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Nov 8 00:01:32.319245 kubelet[2340]: E1108 00:01:32.319195 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.84:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:01:32.319404 kubelet[2340]: W1108 00:01:32.319362 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Nov 8 00:01:32.319436 kubelet[2340]: E1108 00:01:32.319413 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.84:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:01:32.319480 kubelet[2340]: I1108 00:01:32.319465 2340 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:01:32.319585 kubelet[2340]: W1108 00:01:32.319575 2340 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 8 00:01:32.320451 kubelet[2340]: I1108 00:01:32.320436 2340 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:01:32.320488 kubelet[2340]: I1108 00:01:32.320473 2340 server.go:1287] "Started kubelet" Nov 8 00:01:32.320581 kubelet[2340]: I1108 00:01:32.320554 2340 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:01:32.321460 kubelet[2340]: I1108 00:01:32.321440 2340 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:01:32.325084 kubelet[2340]: I1108 00:01:32.325018 2340 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:01:32.326103 kubelet[2340]: I1108 00:01:32.325563 2340 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:01:32.326103 kubelet[2340]: E1108 00:01:32.325859 2340 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.84:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.84:6443: connect: connection refused" 
event="&Event{ObjectMeta:{localhost.1875df10c60a4711 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-08 00:01:32.320450321 +0000 UTC m=+0.776012745,LastTimestamp:2025-11-08 00:01:32.320450321 +0000 UTC m=+0.776012745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 8 00:01:32.327077 kubelet[2340]: I1108 00:01:32.327040 2340 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:01:32.327872 kubelet[2340]: I1108 00:01:32.327841 2340 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:01:32.328481 kubelet[2340]: I1108 00:01:32.328459 2340 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:01:32.329446 kubelet[2340]: E1108 00:01:32.329429 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:01:32.329874 kubelet[2340]: I1108 00:01:32.329845 2340 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:01:32.329934 kubelet[2340]: I1108 00:01:32.329917 2340 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:01:32.331202 kubelet[2340]: W1108 00:01:32.331156 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Nov 8 00:01:32.331304 kubelet[2340]: E1108 00:01:32.331214 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: 
failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:01:32.331545 kubelet[2340]: I1108 00:01:32.331520 2340 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:01:32.331596 kubelet[2340]: E1108 00:01:32.331538 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="200ms" Nov 8 00:01:32.331656 kubelet[2340]: I1108 00:01:32.331605 2340 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:01:32.332345 kubelet[2340]: E1108 00:01:32.332285 2340 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:01:32.332954 kubelet[2340]: I1108 00:01:32.332933 2340 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:01:32.345121 kubelet[2340]: I1108 00:01:32.345080 2340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:01:32.346167 kubelet[2340]: I1108 00:01:32.346138 2340 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:01:32.346167 kubelet[2340]: I1108 00:01:32.346164 2340 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:01:32.346261 kubelet[2340]: I1108 00:01:32.346183 2340 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:01:32.346261 kubelet[2340]: I1108 00:01:32.346191 2340 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:01:32.346261 kubelet[2340]: E1108 00:01:32.346230 2340 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:01:32.348518 kubelet[2340]: W1108 00:01:32.348475 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Nov 8 00:01:32.348640 kubelet[2340]: E1108 00:01:32.348619 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:01:32.348919 kubelet[2340]: I1108 00:01:32.348902 2340 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:01:32.348967 kubelet[2340]: I1108 00:01:32.348919 2340 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:01:32.348967 kubelet[2340]: I1108 00:01:32.348934 2340 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:01:32.429668 kubelet[2340]: E1108 00:01:32.429626 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:01:32.447536 kubelet[2340]: E1108 00:01:32.446946 2340 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:01:32.530406 kubelet[2340]: E1108 00:01:32.530287 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:01:32.532228 kubelet[2340]: E1108 00:01:32.532198 2340 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="400ms" Nov 8 00:01:32.599541 kubelet[2340]: I1108 00:01:32.599476 2340 policy_none.go:49] "None policy: Start" Nov 8 00:01:32.599541 kubelet[2340]: I1108 00:01:32.599535 2340 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:01:32.599541 kubelet[2340]: I1108 00:01:32.599552 2340 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:01:32.616573 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 8 00:01:32.631160 kubelet[2340]: E1108 00:01:32.631100 2340 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:01:32.639133 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 8 00:01:32.642025 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 8 00:01:32.647946 kubelet[2340]: E1108 00:01:32.647910 2340 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 8 00:01:32.658051 kubelet[2340]: I1108 00:01:32.658012 2340 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:01:32.658266 kubelet[2340]: I1108 00:01:32.658247 2340 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:01:32.658301 kubelet[2340]: I1108 00:01:32.658263 2340 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:01:32.658550 kubelet[2340]: I1108 00:01:32.658477 2340 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:01:32.659577 kubelet[2340]: E1108 00:01:32.659401 2340 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:01:32.659577 kubelet[2340]: E1108 00:01:32.659447 2340 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 8 00:01:32.760143 kubelet[2340]: I1108 00:01:32.760092 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:01:32.760864 kubelet[2340]: E1108 00:01:32.760593 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Nov 8 00:01:32.933445 kubelet[2340]: E1108 00:01:32.933095 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="800ms" Nov 8 00:01:32.962687 kubelet[2340]: I1108 00:01:32.962436 2340 
kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:01:32.963417 kubelet[2340]: E1108 00:01:32.963367 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Nov 8 00:01:33.060334 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 8 00:01:33.090228 kubelet[2340]: E1108 00:01:33.090187 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:33.093250 systemd[1]: Created slice kubepods-burstable-podd34ff169f03dcb9e2794895dab11a6e7.slice - libcontainer container kubepods-burstable-podd34ff169f03dcb9e2794895dab11a6e7.slice. Nov 8 00:01:33.095197 kubelet[2340]: E1108 00:01:33.095162 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:33.098678 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Nov 8 00:01:33.100725 kubelet[2340]: E1108 00:01:33.100687 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:33.135535 kubelet[2340]: I1108 00:01:33.135477 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:33.135535 kubelet[2340]: I1108 00:01:33.135524 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:33.135535 kubelet[2340]: I1108 00:01:33.135544 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:33.135731 kubelet[2340]: I1108 00:01:33.135599 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:33.135731 kubelet[2340]: I1108 00:01:33.135636 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:33.135731 kubelet[2340]: I1108 00:01:33.135693 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d34ff169f03dcb9e2794895dab11a6e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d34ff169f03dcb9e2794895dab11a6e7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:33.135731 kubelet[2340]: I1108 00:01:33.135717 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d34ff169f03dcb9e2794895dab11a6e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d34ff169f03dcb9e2794895dab11a6e7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:33.135816 kubelet[2340]: I1108 00:01:33.135733 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:01:33.135816 kubelet[2340]: I1108 00:01:33.135749 2340 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d34ff169f03dcb9e2794895dab11a6e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d34ff169f03dcb9e2794895dab11a6e7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:33.365276 kubelet[2340]: I1108 00:01:33.365239 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:01:33.365839 kubelet[2340]: 
E1108 00:01:33.365801 2340 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.84:6443/api/v1/nodes\": dial tcp 10.0.0.84:6443: connect: connection refused" node="localhost" Nov 8 00:01:33.391568 kubelet[2340]: E1108 00:01:33.391183 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:33.392212 containerd[1567]: time="2025-11-08T00:01:33.392182071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 8 00:01:33.396548 kubelet[2340]: E1108 00:01:33.396525 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:33.397134 containerd[1567]: time="2025-11-08T00:01:33.396923568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d34ff169f03dcb9e2794895dab11a6e7,Namespace:kube-system,Attempt:0,}" Nov 8 00:01:33.401552 kubelet[2340]: E1108 00:01:33.401511 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:33.401984 containerd[1567]: time="2025-11-08T00:01:33.401942855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 8 00:01:33.421346 containerd[1567]: time="2025-11-08T00:01:33.421300596Z" level=info msg="connecting to shim 7b60eb713b7463ff3de4f7ea8e8d7559aa9ad8a3e5be4ef1d9c07ab1111ceede" address="unix:///run/containerd/s/0665bf8531e9b67500e759286ee0114f1a7e30bf19785656fa0b69fbe3e6edde" namespace=k8s.io protocol=ttrpc version=3 Nov 8 00:01:33.439091 containerd[1567]: 
time="2025-11-08T00:01:33.438739097Z" level=info msg="connecting to shim 494ca8d1afe5d40a3069a26aeff1c1f234cb5abd4b6c5f2c52d00aa4bafaef45" address="unix:///run/containerd/s/0c09a54d2dec2f9cb0a96237573f923027af54351c30094526a25f4a8a86bb07" namespace=k8s.io protocol=ttrpc version=3 Nov 8 00:01:33.439091 containerd[1567]: time="2025-11-08T00:01:33.438894815Z" level=info msg="connecting to shim f23967af3d0278c7028b3ef228a95f9d0d11b52731c16e60b16dec1dc55cdb89" address="unix:///run/containerd/s/dabd5a8815f5075dc208234be6e8aaa4f414291eacbe2c3014a2c17317903fb2" namespace=k8s.io protocol=ttrpc version=3 Nov 8 00:01:33.462373 systemd[1]: Started cri-containerd-7b60eb713b7463ff3de4f7ea8e8d7559aa9ad8a3e5be4ef1d9c07ab1111ceede.scope - libcontainer container 7b60eb713b7463ff3de4f7ea8e8d7559aa9ad8a3e5be4ef1d9c07ab1111ceede. Nov 8 00:01:33.468266 systemd[1]: Started cri-containerd-494ca8d1afe5d40a3069a26aeff1c1f234cb5abd4b6c5f2c52d00aa4bafaef45.scope - libcontainer container 494ca8d1afe5d40a3069a26aeff1c1f234cb5abd4b6c5f2c52d00aa4bafaef45. Nov 8 00:01:33.470249 systemd[1]: Started cri-containerd-f23967af3d0278c7028b3ef228a95f9d0d11b52731c16e60b16dec1dc55cdb89.scope - libcontainer container f23967af3d0278c7028b3ef228a95f9d0d11b52731c16e60b16dec1dc55cdb89. 
Nov 8 00:01:33.482566 kubelet[2340]: W1108 00:01:33.482467 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Nov 8 00:01:33.482566 kubelet[2340]: E1108 00:01:33.482532 2340 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.84:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:01:33.516678 containerd[1567]: time="2025-11-08T00:01:33.516630538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"494ca8d1afe5d40a3069a26aeff1c1f234cb5abd4b6c5f2c52d00aa4bafaef45\"" Nov 8 00:01:33.518340 kubelet[2340]: E1108 00:01:33.517824 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:33.521899 containerd[1567]: time="2025-11-08T00:01:33.521860007Z" level=info msg="CreateContainer within sandbox \"494ca8d1afe5d40a3069a26aeff1c1f234cb5abd4b6c5f2c52d00aa4bafaef45\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 8 00:01:33.526160 containerd[1567]: time="2025-11-08T00:01:33.525661444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b60eb713b7463ff3de4f7ea8e8d7559aa9ad8a3e5be4ef1d9c07ab1111ceede\"" Nov 8 00:01:33.527858 kubelet[2340]: E1108 00:01:33.527779 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:33.534212 containerd[1567]: time="2025-11-08T00:01:33.534167577Z" level=info msg="CreateContainer within sandbox \"7b60eb713b7463ff3de4f7ea8e8d7559aa9ad8a3e5be4ef1d9c07ab1111ceede\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 8 00:01:33.534933 containerd[1567]: time="2025-11-08T00:01:33.534895937Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d34ff169f03dcb9e2794895dab11a6e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"f23967af3d0278c7028b3ef228a95f9d0d11b52731c16e60b16dec1dc55cdb89\"" Nov 8 00:01:33.535691 kubelet[2340]: E1108 00:01:33.535667 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:33.538642 containerd[1567]: time="2025-11-08T00:01:33.537814215Z" level=info msg="CreateContainer within sandbox \"f23967af3d0278c7028b3ef228a95f9d0d11b52731c16e60b16dec1dc55cdb89\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 8 00:01:33.540616 containerd[1567]: time="2025-11-08T00:01:33.540564588Z" level=info msg="Container 346d9ff1a432534df434e712d633fdef7e2b3a69a0ea93c81fff992d00515e14: CDI devices from CRI Config.CDIDevices: []" Nov 8 00:01:33.555027 containerd[1567]: time="2025-11-08T00:01:33.554976644Z" level=info msg="Container f70fddf1cb8c1b16407fb54df71bffbf2ae43d8602fadf3e533462c0a5051550: CDI devices from CRI Config.CDIDevices: []" Nov 8 00:01:33.565075 containerd[1567]: time="2025-11-08T00:01:33.565018536Z" level=info msg="CreateContainer within sandbox \"494ca8d1afe5d40a3069a26aeff1c1f234cb5abd4b6c5f2c52d00aa4bafaef45\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"346d9ff1a432534df434e712d633fdef7e2b3a69a0ea93c81fff992d00515e14\"" Nov 8 00:01:33.565915 containerd[1567]: 
time="2025-11-08T00:01:33.565883794Z" level=info msg="StartContainer for \"346d9ff1a432534df434e712d633fdef7e2b3a69a0ea93c81fff992d00515e14\"" Nov 8 00:01:33.567574 containerd[1567]: time="2025-11-08T00:01:33.567536231Z" level=info msg="connecting to shim 346d9ff1a432534df434e712d633fdef7e2b3a69a0ea93c81fff992d00515e14" address="unix:///run/containerd/s/0c09a54d2dec2f9cb0a96237573f923027af54351c30094526a25f4a8a86bb07" protocol=ttrpc version=3 Nov 8 00:01:33.577071 containerd[1567]: time="2025-11-08T00:01:33.577006759Z" level=info msg="Container 13804b17fa851637030ceb024319f0bf659c041b4edbb87eff17d0c4ae3a6cd2: CDI devices from CRI Config.CDIDevices: []" Nov 8 00:01:33.589564 containerd[1567]: time="2025-11-08T00:01:33.589504091Z" level=info msg="CreateContainer within sandbox \"7b60eb713b7463ff3de4f7ea8e8d7559aa9ad8a3e5be4ef1d9c07ab1111ceede\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f70fddf1cb8c1b16407fb54df71bffbf2ae43d8602fadf3e533462c0a5051550\"" Nov 8 00:01:33.590138 containerd[1567]: time="2025-11-08T00:01:33.590102986Z" level=info msg="StartContainer for \"f70fddf1cb8c1b16407fb54df71bffbf2ae43d8602fadf3e533462c0a5051550\"" Nov 8 00:01:33.590192 containerd[1567]: time="2025-11-08T00:01:33.590124045Z" level=info msg="CreateContainer within sandbox \"f23967af3d0278c7028b3ef228a95f9d0d11b52731c16e60b16dec1dc55cdb89\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"13804b17fa851637030ceb024319f0bf659c041b4edbb87eff17d0c4ae3a6cd2\"" Nov 8 00:01:33.590560 containerd[1567]: time="2025-11-08T00:01:33.590533378Z" level=info msg="StartContainer for \"13804b17fa851637030ceb024319f0bf659c041b4edbb87eff17d0c4ae3a6cd2\"" Nov 8 00:01:33.591426 containerd[1567]: time="2025-11-08T00:01:33.591387248Z" level=info msg="connecting to shim f70fddf1cb8c1b16407fb54df71bffbf2ae43d8602fadf3e533462c0a5051550" address="unix:///run/containerd/s/0665bf8531e9b67500e759286ee0114f1a7e30bf19785656fa0b69fbe3e6edde" 
protocol=ttrpc version=3 Nov 8 00:01:33.591804 containerd[1567]: time="2025-11-08T00:01:33.591774244Z" level=info msg="connecting to shim 13804b17fa851637030ceb024319f0bf659c041b4edbb87eff17d0c4ae3a6cd2" address="unix:///run/containerd/s/dabd5a8815f5075dc208234be6e8aaa4f414291eacbe2c3014a2c17317903fb2" protocol=ttrpc version=3 Nov 8 00:01:33.595281 systemd[1]: Started cri-containerd-346d9ff1a432534df434e712d633fdef7e2b3a69a0ea93c81fff992d00515e14.scope - libcontainer container 346d9ff1a432534df434e712d633fdef7e2b3a69a0ea93c81fff992d00515e14. Nov 8 00:01:33.619325 systemd[1]: Started cri-containerd-13804b17fa851637030ceb024319f0bf659c041b4edbb87eff17d0c4ae3a6cd2.scope - libcontainer container 13804b17fa851637030ceb024319f0bf659c041b4edbb87eff17d0c4ae3a6cd2. Nov 8 00:01:33.624333 systemd[1]: Started cri-containerd-f70fddf1cb8c1b16407fb54df71bffbf2ae43d8602fadf3e533462c0a5051550.scope - libcontainer container f70fddf1cb8c1b16407fb54df71bffbf2ae43d8602fadf3e533462c0a5051550. Nov 8 00:01:33.665085 containerd[1567]: time="2025-11-08T00:01:33.665017052Z" level=info msg="StartContainer for \"346d9ff1a432534df434e712d633fdef7e2b3a69a0ea93c81fff992d00515e14\" returns successfully" Nov 8 00:01:33.678455 containerd[1567]: time="2025-11-08T00:01:33.678224484Z" level=info msg="StartContainer for \"13804b17fa851637030ceb024319f0bf659c041b4edbb87eff17d0c4ae3a6cd2\" returns successfully" Nov 8 00:01:33.684354 containerd[1567]: time="2025-11-08T00:01:33.684306783Z" level=info msg="StartContainer for \"f70fddf1cb8c1b16407fb54df71bffbf2ae43d8602fadf3e533462c0a5051550\" returns successfully" Nov 8 00:01:33.725832 kubelet[2340]: W1108 00:01:33.725717 2340 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.84:6443: connect: connection refused Nov 8 00:01:33.725832 kubelet[2340]: E1108 00:01:33.725790 2340 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.84:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.84:6443: connect: connection refused" logger="UnhandledError" Nov 8 00:01:33.733924 kubelet[2340]: E1108 00:01:33.733869 2340 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.84:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.84:6443: connect: connection refused" interval="1.6s" Nov 8 00:01:34.168068 kubelet[2340]: I1108 00:01:34.167456 2340 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:01:34.361089 kubelet[2340]: E1108 00:01:34.360883 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:34.361089 kubelet[2340]: E1108 00:01:34.361005 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:34.364072 kubelet[2340]: E1108 00:01:34.364034 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:34.364215 kubelet[2340]: E1108 00:01:34.364195 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:34.367892 kubelet[2340]: E1108 00:01:34.367870 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:34.368016 kubelet[2340]: E1108 00:01:34.367998 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:35.374612 kubelet[2340]: E1108 00:01:35.374575 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:35.374957 kubelet[2340]: E1108 00:01:35.374930 2340 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 8 00:01:35.375078 kubelet[2340]: E1108 00:01:35.375049 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:35.376184 kubelet[2340]: E1108 00:01:35.376164 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:35.446811 kubelet[2340]: E1108 00:01:35.446757 2340 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 8 00:01:35.519713 kubelet[2340]: I1108 00:01:35.519677 2340 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:01:35.529879 kubelet[2340]: I1108 00:01:35.529846 2340 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:35.635531 kubelet[2340]: E1108 00:01:35.635427 2340 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:35.635531 kubelet[2340]: I1108 00:01:35.635462 2340 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:35.637792 kubelet[2340]: E1108 00:01:35.637573 2340 
kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:35.637792 kubelet[2340]: I1108 00:01:35.637603 2340 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:01:35.639278 kubelet[2340]: E1108 00:01:35.639252 2340 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 8 00:01:36.319737 kubelet[2340]: I1108 00:01:36.319700 2340 apiserver.go:52] "Watching apiserver" Nov 8 00:01:36.330539 kubelet[2340]: I1108 00:01:36.330507 2340 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:01:37.083237 kubelet[2340]: I1108 00:01:37.083047 2340 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:37.089254 kubelet[2340]: E1108 00:01:37.089231 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:37.377243 kubelet[2340]: E1108 00:01:37.377092 2340 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:37.510377 systemd[1]: Reload requested from client PID 2616 ('systemctl') (unit session-7.scope)... Nov 8 00:01:37.510392 systemd[1]: Reloading... Nov 8 00:01:37.572087 zram_generator::config[2660]: No configuration found. Nov 8 00:01:37.744347 systemd[1]: Reloading finished in 233 ms. Nov 8 00:01:37.771585 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Nov 8 00:01:37.792937 systemd[1]: kubelet.service: Deactivated successfully. Nov 8 00:01:37.793239 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:01:37.793299 systemd[1]: kubelet.service: Consumed 1.158s CPU time, 131.6M memory peak. Nov 8 00:01:37.795155 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 8 00:01:37.944995 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 8 00:01:37.957315 (kubelet)[2702]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 8 00:01:37.989274 kubelet[2702]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 8 00:01:37.989274 kubelet[2702]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 8 00:01:37.989274 kubelet[2702]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 8 00:01:37.989601 kubelet[2702]: I1108 00:01:37.989333 2702 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 8 00:01:37.996763 kubelet[2702]: I1108 00:01:37.996656 2702 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 8 00:01:37.996763 kubelet[2702]: I1108 00:01:37.996686 2702 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 8 00:01:37.997031 kubelet[2702]: I1108 00:01:37.996995 2702 server.go:954] "Client rotation is on, will bootstrap in background" Nov 8 00:01:37.999110 kubelet[2702]: I1108 00:01:37.999090 2702 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 8 00:01:38.001967 kubelet[2702]: I1108 00:01:38.001882 2702 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 8 00:01:38.005276 kubelet[2702]: I1108 00:01:38.005256 2702 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 8 00:01:38.008748 kubelet[2702]: I1108 00:01:38.008710 2702 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 8 00:01:38.008932 kubelet[2702]: I1108 00:01:38.008905 2702 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 8 00:01:38.009101 kubelet[2702]: I1108 00:01:38.008931 2702 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 8 00:01:38.009186 kubelet[2702]: I1108 00:01:38.009114 2702 topology_manager.go:138] "Creating topology manager with none policy" Nov 
8 00:01:38.009186 kubelet[2702]: I1108 00:01:38.009123 2702 container_manager_linux.go:304] "Creating device plugin manager" Nov 8 00:01:38.009186 kubelet[2702]: I1108 00:01:38.009162 2702 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:01:38.009299 kubelet[2702]: I1108 00:01:38.009285 2702 kubelet.go:446] "Attempting to sync node with API server" Nov 8 00:01:38.009326 kubelet[2702]: I1108 00:01:38.009306 2702 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 8 00:01:38.009345 kubelet[2702]: I1108 00:01:38.009330 2702 kubelet.go:352] "Adding apiserver pod source" Nov 8 00:01:38.009345 kubelet[2702]: I1108 00:01:38.009344 2702 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 8 00:01:38.012095 kubelet[2702]: I1108 00:01:38.009823 2702 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Nov 8 00:01:38.012095 kubelet[2702]: I1108 00:01:38.010261 2702 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 8 00:01:38.012095 kubelet[2702]: I1108 00:01:38.010612 2702 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 8 00:01:38.012095 kubelet[2702]: I1108 00:01:38.010634 2702 server.go:1287] "Started kubelet" Nov 8 00:01:38.012095 kubelet[2702]: I1108 00:01:38.010893 2702 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 8 00:01:38.012095 kubelet[2702]: I1108 00:01:38.011836 2702 server.go:479] "Adding debug handlers to kubelet server" Nov 8 00:01:38.012698 kubelet[2702]: I1108 00:01:38.012644 2702 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 8 00:01:38.012853 kubelet[2702]: I1108 00:01:38.012833 2702 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 8 00:01:38.014881 kubelet[2702]: I1108 00:01:38.014852 2702 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 8 00:01:38.017475 kubelet[2702]: I1108 00:01:38.017451 2702 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 8 00:01:38.019936 kubelet[2702]: E1108 00:01:38.018599 2702 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 8 00:01:38.019936 kubelet[2702]: I1108 00:01:38.018636 2702 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 8 00:01:38.019936 kubelet[2702]: I1108 00:01:38.018792 2702 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 8 00:01:38.019936 kubelet[2702]: I1108 00:01:38.018898 2702 reconciler.go:26] "Reconciler: start to sync state" Nov 8 00:01:38.026182 kubelet[2702]: I1108 00:01:38.026150 2702 factory.go:221] Registration of the systemd container factory successfully Nov 8 00:01:38.026266 kubelet[2702]: I1108 00:01:38.026256 2702 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 8 00:01:38.026728 kubelet[2702]: I1108 00:01:38.026697 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 8 00:01:38.028061 kubelet[2702]: I1108 00:01:38.028029 2702 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 8 00:01:38.028149 kubelet[2702]: I1108 00:01:38.028099 2702 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 8 00:01:38.028149 kubelet[2702]: I1108 00:01:38.028126 2702 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 8 00:01:38.028149 kubelet[2702]: I1108 00:01:38.028136 2702 kubelet.go:2382] "Starting kubelet main sync loop" Nov 8 00:01:38.028224 kubelet[2702]: E1108 00:01:38.028183 2702 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 8 00:01:38.038882 kubelet[2702]: E1108 00:01:38.038813 2702 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 8 00:01:38.039557 kubelet[2702]: I1108 00:01:38.039531 2702 factory.go:221] Registration of the containerd container factory successfully Nov 8 00:01:38.082470 kubelet[2702]: I1108 00:01:38.082434 2702 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 8 00:01:38.082645 kubelet[2702]: I1108 00:01:38.082631 2702 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 8 00:01:38.082698 kubelet[2702]: I1108 00:01:38.082690 2702 state_mem.go:36] "Initialized new in-memory state store" Nov 8 00:01:38.082949 kubelet[2702]: I1108 00:01:38.082932 2702 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 8 00:01:38.083026 kubelet[2702]: I1108 00:01:38.083003 2702 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 8 00:01:38.083114 kubelet[2702]: I1108 00:01:38.083103 2702 policy_none.go:49] "None policy: Start" Nov 8 00:01:38.083175 kubelet[2702]: I1108 00:01:38.083166 2702 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 8 00:01:38.083240 kubelet[2702]: I1108 00:01:38.083233 2702 state_mem.go:35] "Initializing new in-memory state store" Nov 8 00:01:38.083401 kubelet[2702]: I1108 00:01:38.083386 2702 state_mem.go:75] "Updated machine memory state" Nov 8 00:01:38.088435 kubelet[2702]: I1108 00:01:38.087452 2702 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 8 00:01:38.088435 kubelet[2702]: I1108 
00:01:38.087629 2702 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 8 00:01:38.088435 kubelet[2702]: I1108 00:01:38.087642 2702 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 8 00:01:38.088435 kubelet[2702]: I1108 00:01:38.087886 2702 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 8 00:01:38.090197 kubelet[2702]: E1108 00:01:38.090126 2702 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 8 00:01:38.129340 kubelet[2702]: I1108 00:01:38.129292 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:38.129457 kubelet[2702]: I1108 00:01:38.129375 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:01:38.129457 kubelet[2702]: I1108 00:01:38.129306 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:38.145049 kubelet[2702]: E1108 00:01:38.144991 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:38.191497 kubelet[2702]: I1108 00:01:38.191442 2702 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 8 00:01:38.213242 kubelet[2702]: I1108 00:01:38.213198 2702 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 8 00:01:38.213351 kubelet[2702]: I1108 00:01:38.213294 2702 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 8 00:01:38.219765 kubelet[2702]: I1108 00:01:38.219654 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/d34ff169f03dcb9e2794895dab11a6e7-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d34ff169f03dcb9e2794895dab11a6e7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:38.219765 kubelet[2702]: I1108 00:01:38.219707 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:38.219765 kubelet[2702]: I1108 00:01:38.219729 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:38.219765 kubelet[2702]: I1108 00:01:38.219753 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 8 00:01:38.219765 kubelet[2702]: I1108 00:01:38.219771 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d34ff169f03dcb9e2794895dab11a6e7-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d34ff169f03dcb9e2794895dab11a6e7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:38.219999 kubelet[2702]: I1108 00:01:38.219812 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:38.219999 kubelet[2702]: I1108 00:01:38.219846 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:38.219999 kubelet[2702]: I1108 00:01:38.219866 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 8 00:01:38.219999 kubelet[2702]: I1108 00:01:38.219893 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d34ff169f03dcb9e2794895dab11a6e7-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d34ff169f03dcb9e2794895dab11a6e7\") " pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:38.439236 kubelet[2702]: E1108 00:01:38.439095 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:38.439236 kubelet[2702]: E1108 00:01:38.439153 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:38.446086 kubelet[2702]: E1108 00:01:38.446032 2702 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:38.528960 sudo[2739]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 8 00:01:38.529450 sudo[2739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 8 00:01:38.865292 sudo[2739]: pam_unix(sudo:session): session closed for user root Nov 8 00:01:39.010408 kubelet[2702]: I1108 00:01:39.010361 2702 apiserver.go:52] "Watching apiserver" Nov 8 00:01:39.019462 kubelet[2702]: I1108 00:01:39.019432 2702 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 8 00:01:39.059099 kubelet[2702]: I1108 00:01:39.059069 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:39.059463 kubelet[2702]: I1108 00:01:39.059249 2702 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 8 00:01:39.059463 kubelet[2702]: E1108 00:01:39.059292 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:39.104175 kubelet[2702]: E1108 00:01:39.103936 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 8 00:01:39.104350 kubelet[2702]: E1108 00:01:39.104324 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:39.104427 kubelet[2702]: E1108 00:01:39.104326 2702 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 8 00:01:39.104656 kubelet[2702]: E1108 00:01:39.104621 2702 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:39.139548 kubelet[2702]: I1108 00:01:39.139410 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.139391206 podStartE2EDuration="2.139391206s" podCreationTimestamp="2025-11-08 00:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:01:39.126954981 +0000 UTC m=+1.166840630" watchObservedRunningTime="2025-11-08 00:01:39.139391206 +0000 UTC m=+1.179276855" Nov 8 00:01:39.139667 kubelet[2702]: I1108 00:01:39.139546 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.13953202 podStartE2EDuration="1.13953202s" podCreationTimestamp="2025-11-08 00:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:01:39.138940337 +0000 UTC m=+1.178825986" watchObservedRunningTime="2025-11-08 00:01:39.13953202 +0000 UTC m=+1.179417669" Nov 8 00:01:39.148702 kubelet[2702]: I1108 00:01:39.148636 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.148618451 podStartE2EDuration="1.148618451s" podCreationTimestamp="2025-11-08 00:01:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:01:39.14830016 +0000 UTC m=+1.188185809" watchObservedRunningTime="2025-11-08 00:01:39.148618451 +0000 UTC m=+1.188504100" Nov 8 00:01:40.060768 kubelet[2702]: E1108 00:01:40.060581 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:40.060768 kubelet[2702]: E1108 00:01:40.060684 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:40.693918 sudo[1780]: pam_unix(sudo:session): session closed for user root Nov 8 00:01:40.698755 sshd[1779]: Connection closed by 10.0.0.1 port 53630 Nov 8 00:01:40.698332 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Nov 8 00:01:40.704249 systemd[1]: sshd@6-10.0.0.84:22-10.0.0.1:53630.service: Deactivated successfully. Nov 8 00:01:40.706729 systemd[1]: session-7.scope: Deactivated successfully. Nov 8 00:01:40.706950 systemd[1]: session-7.scope: Consumed 6.831s CPU time, 255.4M memory peak. Nov 8 00:01:40.707883 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit. Nov 8 00:01:40.709500 systemd-logind[1554]: Removed session 7. Nov 8 00:01:41.061962 kubelet[2702]: E1108 00:01:41.061936 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:42.867223 kubelet[2702]: I1108 00:01:42.867133 2702 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 8 00:01:42.867667 kubelet[2702]: I1108 00:01:42.867650 2702 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 8 00:01:42.867716 containerd[1567]: time="2025-11-08T00:01:42.867460331Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Nov 8 00:01:43.532653 kubelet[2702]: E1108 00:01:43.532601 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:43.668919 systemd[1]: Created slice kubepods-besteffort-pode46b8b56_8fab_43bc_b60e_a22160a7eeea.slice - libcontainer container kubepods-besteffort-pode46b8b56_8fab_43bc_b60e_a22160a7eeea.slice. Nov 8 00:01:43.694506 systemd[1]: Created slice kubepods-burstable-pod51d8a772_7cc1_4b91_bb29_ac0b7fb2a39b.slice - libcontainer container kubepods-burstable-pod51d8a772_7cc1_4b91_bb29_ac0b7fb2a39b.slice. Nov 8 00:01:43.754043 kubelet[2702]: I1108 00:01:43.754010 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e46b8b56-8fab-43bc-b60e-a22160a7eeea-xtables-lock\") pod \"kube-proxy-z8v24\" (UID: \"e46b8b56-8fab-43bc-b60e-a22160a7eeea\") " pod="kube-system/kube-proxy-z8v24" Nov 8 00:01:43.754289 kubelet[2702]: I1108 00:01:43.754255 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvnj5\" (UniqueName: \"kubernetes.io/projected/e46b8b56-8fab-43bc-b60e-a22160a7eeea-kube-api-access-cvnj5\") pod \"kube-proxy-z8v24\" (UID: \"e46b8b56-8fab-43bc-b60e-a22160a7eeea\") " pod="kube-system/kube-proxy-z8v24" Nov 8 00:01:43.754368 kubelet[2702]: I1108 00:01:43.754326 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e46b8b56-8fab-43bc-b60e-a22160a7eeea-kube-proxy\") pod \"kube-proxy-z8v24\" (UID: \"e46b8b56-8fab-43bc-b60e-a22160a7eeea\") " pod="kube-system/kube-proxy-z8v24" Nov 8 00:01:43.754402 kubelet[2702]: I1108 00:01:43.754380 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/e46b8b56-8fab-43bc-b60e-a22160a7eeea-lib-modules\") pod \"kube-proxy-z8v24\" (UID: \"e46b8b56-8fab-43bc-b60e-a22160a7eeea\") " pod="kube-system/kube-proxy-z8v24" Nov 8 00:01:43.856004 kubelet[2702]: I1108 00:01:43.855362 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-clustermesh-secrets\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856004 kubelet[2702]: I1108 00:01:43.855404 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-kernel\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856004 kubelet[2702]: I1108 00:01:43.855445 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-lib-modules\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856004 kubelet[2702]: I1108 00:01:43.855460 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hubble-tls\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856004 kubelet[2702]: I1108 00:01:43.855481 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7nqr\" (UniqueName: \"kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-kube-api-access-d7nqr\") pod \"cilium-spgzz\" 
(UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856004 kubelet[2702]: I1108 00:01:43.855497 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cni-path\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856291 kubelet[2702]: I1108 00:01:43.855512 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-config-path\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856291 kubelet[2702]: I1108 00:01:43.855537 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-run\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856291 kubelet[2702]: I1108 00:01:43.855552 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-cgroup\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856291 kubelet[2702]: I1108 00:01:43.855566 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-etc-cni-netd\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856291 kubelet[2702]: I1108 00:01:43.855581 2702 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-bpf-maps\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856291 kubelet[2702]: I1108 00:01:43.855641 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hostproc\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856425 kubelet[2702]: I1108 00:01:43.855665 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-xtables-lock\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:43.856425 kubelet[2702]: I1108 00:01:43.855681 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-net\") pod \"cilium-spgzz\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " pod="kube-system/cilium-spgzz" Nov 8 00:01:44.032906 kubelet[2702]: E1108 00:01:44.032773 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:44.032906 kubelet[2702]: E1108 00:01:44.032806 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:44.035228 containerd[1567]: time="2025-11-08T00:01:44.035182738Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-z8v24,Uid:e46b8b56-8fab-43bc-b60e-a22160a7eeea,Namespace:kube-system,Attempt:0,}" Nov 8 00:01:44.038779 containerd[1567]: time="2025-11-08T00:01:44.036441504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spgzz,Uid:51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b,Namespace:kube-system,Attempt:0,}" Nov 8 00:01:44.035426 systemd[1]: Created slice kubepods-besteffort-podaf99f6cc_07f5_48d9_b9a4_e9ead82d6ad9.slice - libcontainer container kubepods-besteffort-podaf99f6cc_07f5_48d9_b9a4_e9ead82d6ad9.slice. Nov 8 00:01:44.060669 containerd[1567]: time="2025-11-08T00:01:44.060603752Z" level=info msg="connecting to shim 0c41009e161883a1f5fa9c69842086b089b63e15e99a8db2f592e4bf4dc75d5d" address="unix:///run/containerd/s/bc44602783f289ff93d6cb8afb60a0e0ffa08e4e394dced4e4412d0389ca570e" namespace=k8s.io protocol=ttrpc version=3 Nov 8 00:01:44.070043 kubelet[2702]: E1108 00:01:44.069683 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:44.071255 containerd[1567]: time="2025-11-08T00:01:44.071213688Z" level=info msg="connecting to shim 2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547" address="unix:///run/containerd/s/9444b659f9ddef4d932a7320eebbdf8dfd324c57fa50979670a9c8f808da5c8a" namespace=k8s.io protocol=ttrpc version=3 Nov 8 00:01:44.089265 systemd[1]: Started cri-containerd-0c41009e161883a1f5fa9c69842086b089b63e15e99a8db2f592e4bf4dc75d5d.scope - libcontainer container 0c41009e161883a1f5fa9c69842086b089b63e15e99a8db2f592e4bf4dc75d5d. Nov 8 00:01:44.095427 systemd[1]: Started cri-containerd-2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547.scope - libcontainer container 2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547. 
Nov 8 00:01:44.129194 containerd[1567]: time="2025-11-08T00:01:44.128050989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z8v24,Uid:e46b8b56-8fab-43bc-b60e-a22160a7eeea,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c41009e161883a1f5fa9c69842086b089b63e15e99a8db2f592e4bf4dc75d5d\"" Nov 8 00:01:44.130265 kubelet[2702]: E1108 00:01:44.130213 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:44.134537 containerd[1567]: time="2025-11-08T00:01:44.134490743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-spgzz,Uid:51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\"" Nov 8 00:01:44.134644 containerd[1567]: time="2025-11-08T00:01:44.134509663Z" level=info msg="CreateContainer within sandbox \"0c41009e161883a1f5fa9c69842086b089b63e15e99a8db2f592e4bf4dc75d5d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 8 00:01:44.135384 kubelet[2702]: E1108 00:01:44.135359 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:44.136381 containerd[1567]: time="2025-11-08T00:01:44.136340033Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 8 00:01:44.158846 kubelet[2702]: I1108 00:01:44.158775 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bgmf\" (UniqueName: \"kubernetes.io/projected/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-kube-api-access-7bgmf\") pod \"cilium-operator-6c4d7847fc-4lpv6\" (UID: \"af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9\") " pod="kube-system/cilium-operator-6c4d7847fc-4lpv6" 
Nov 8 00:01:44.158846 kubelet[2702]: I1108 00:01:44.158826 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4lpv6\" (UID: \"af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9\") " pod="kube-system/cilium-operator-6c4d7847fc-4lpv6" Nov 8 00:01:44.170006 containerd[1567]: time="2025-11-08T00:01:44.169967690Z" level=info msg="Container b434d8c2f3f9bcf03e4eee0921eb471cc8834489862281f207d94872e08214db: CDI devices from CRI Config.CDIDevices: []" Nov 8 00:01:44.178024 containerd[1567]: time="2025-11-08T00:01:44.177978933Z" level=info msg="CreateContainer within sandbox \"0c41009e161883a1f5fa9c69842086b089b63e15e99a8db2f592e4bf4dc75d5d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b434d8c2f3f9bcf03e4eee0921eb471cc8834489862281f207d94872e08214db\"" Nov 8 00:01:44.178851 containerd[1567]: time="2025-11-08T00:01:44.178789857Z" level=info msg="StartContainer for \"b434d8c2f3f9bcf03e4eee0921eb471cc8834489862281f207d94872e08214db\"" Nov 8 00:01:44.180191 containerd[1567]: time="2025-11-08T00:01:44.180165744Z" level=info msg="connecting to shim b434d8c2f3f9bcf03e4eee0921eb471cc8834489862281f207d94872e08214db" address="unix:///run/containerd/s/bc44602783f289ff93d6cb8afb60a0e0ffa08e4e394dced4e4412d0389ca570e" protocol=ttrpc version=3 Nov 8 00:01:44.208249 systemd[1]: Started cri-containerd-b434d8c2f3f9bcf03e4eee0921eb471cc8834489862281f207d94872e08214db.scope - libcontainer container b434d8c2f3f9bcf03e4eee0921eb471cc8834489862281f207d94872e08214db. 
Nov 8 00:01:44.299886 containerd[1567]: time="2025-11-08T00:01:44.299360815Z" level=info msg="StartContainer for \"b434d8c2f3f9bcf03e4eee0921eb471cc8834489862281f207d94872e08214db\" returns successfully" Nov 8 00:01:44.339985 kubelet[2702]: E1108 00:01:44.339526 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:01:44.340384 containerd[1567]: time="2025-11-08T00:01:44.340339951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4lpv6,Uid:af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9,Namespace:kube-system,Attempt:0,}" Nov 8 00:01:44.363631 containerd[1567]: time="2025-11-08T00:01:44.363580914Z" level=info msg="connecting to shim f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9" address="unix:///run/containerd/s/1b0cc9568381a37a9e8e64f118ba5b21e7d2c52e37ce761ef86a2403f3b8e822" namespace=k8s.io protocol=ttrpc version=3 Nov 8 00:01:44.395239 systemd[1]: Started cri-containerd-f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9.scope - libcontainer container f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9. 
Nov 8 00:01:44.437566 containerd[1567]: time="2025-11-08T00:01:44.437409225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4lpv6,Uid:af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\""
Nov 8 00:01:44.438577 kubelet[2702]: E1108 00:01:44.438248 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:44.895647 kubelet[2702]: E1108 00:01:44.895290 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:45.078585 kubelet[2702]: E1108 00:01:45.078555 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:45.078908 kubelet[2702]: E1108 00:01:45.078613 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:45.078908 kubelet[2702]: E1108 00:01:45.078750 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:45.090842 kubelet[2702]: I1108 00:01:45.090212 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-z8v24" podStartSLOduration=2.090193252 podStartE2EDuration="2.090193252s" podCreationTimestamp="2025-11-08 00:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:01:45.090135931 +0000 UTC m=+7.130021580" watchObservedRunningTime="2025-11-08 00:01:45.090193252 +0000 UTC m=+7.130078901"
Nov 8 00:01:47.134611 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066730899.mount: Deactivated successfully.
Nov 8 00:01:48.754510 kubelet[2702]: E1108 00:01:48.753381 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:48.826211 containerd[1567]: time="2025-11-08T00:01:48.826154162Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:01:48.827574 containerd[1567]: time="2025-11-08T00:01:48.827515968Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Nov 8 00:01:48.828869 containerd[1567]: time="2025-11-08T00:01:48.828838413Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:01:48.840385 containerd[1567]: time="2025-11-08T00:01:48.840315222Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.703921589s"
Nov 8 00:01:48.840385 containerd[1567]: time="2025-11-08T00:01:48.840368902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Nov 8 00:01:48.846076 containerd[1567]: time="2025-11-08T00:01:48.846028606Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Nov 8 00:01:48.850209 containerd[1567]: time="2025-11-08T00:01:48.850045543Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 8 00:01:48.861089 containerd[1567]: time="2025-11-08T00:01:48.858541819Z" level=info msg="Container e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:01:48.866167 containerd[1567]: time="2025-11-08T00:01:48.866127411Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\""
Nov 8 00:01:48.866814 containerd[1567]: time="2025-11-08T00:01:48.866784134Z" level=info msg="StartContainer for \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\""
Nov 8 00:01:48.868032 containerd[1567]: time="2025-11-08T00:01:48.868000939Z" level=info msg="connecting to shim e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1" address="unix:///run/containerd/s/9444b659f9ddef4d932a7320eebbdf8dfd324c57fa50979670a9c8f808da5c8a" protocol=ttrpc version=3
Nov 8 00:01:48.917281 systemd[1]: Started cri-containerd-e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1.scope - libcontainer container e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1.
Nov 8 00:01:48.954250 containerd[1567]: time="2025-11-08T00:01:48.954199945Z" level=info msg="StartContainer for \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\" returns successfully"
Nov 8 00:01:48.965180 systemd[1]: cri-containerd-e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1.scope: Deactivated successfully.
Nov 8 00:01:49.002485 containerd[1567]: time="2025-11-08T00:01:49.002410909Z" level=info msg="received container exit event container_id:\"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\" id:\"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\" pid:3130 exited_at:{seconds:1762560108 nanos:996921246}"
Nov 8 00:01:49.048625 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1-rootfs.mount: Deactivated successfully.
Nov 8 00:01:49.124597 kubelet[2702]: E1108 00:01:49.124551 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:49.124805 kubelet[2702]: E1108 00:01:49.124615 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:50.116954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount659316530.mount: Deactivated successfully.
Nov 8 00:01:50.129081 kubelet[2702]: E1108 00:01:50.128382 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:50.136082 containerd[1567]: time="2025-11-08T00:01:50.134901037Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 8 00:01:50.145464 containerd[1567]: time="2025-11-08T00:01:50.145141556Z" level=info msg="Container b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:01:50.157009 containerd[1567]: time="2025-11-08T00:01:50.156919761Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\""
Nov 8 00:01:50.157651 containerd[1567]: time="2025-11-08T00:01:50.157611403Z" level=info msg="StartContainer for \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\""
Nov 8 00:01:50.158839 containerd[1567]: time="2025-11-08T00:01:50.158761368Z" level=info msg="connecting to shim b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70" address="unix:///run/containerd/s/9444b659f9ddef4d932a7320eebbdf8dfd324c57fa50979670a9c8f808da5c8a" protocol=ttrpc version=3
Nov 8 00:01:50.181243 systemd[1]: Started cri-containerd-b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70.scope - libcontainer container b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70.
Nov 8 00:01:50.215895 containerd[1567]: time="2025-11-08T00:01:50.215840545Z" level=info msg="StartContainer for \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\" returns successfully"
Nov 8 00:01:50.228571 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 8 00:01:50.228809 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:01:50.228875 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:01:50.230358 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 8 00:01:50.231755 systemd[1]: cri-containerd-b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70.scope: Deactivated successfully.
Nov 8 00:01:50.234922 containerd[1567]: time="2025-11-08T00:01:50.234683217Z" level=info msg="received container exit event container_id:\"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\" id:\"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\" pid:3188 exited_at:{seconds:1762560110 nanos:234158735}"
Nov 8 00:01:50.290177 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 8 00:01:50.457256 containerd[1567]: time="2025-11-08T00:01:50.456850145Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:01:50.457546 containerd[1567]: time="2025-11-08T00:01:50.457514187Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Nov 8 00:01:50.463964 containerd[1567]: time="2025-11-08T00:01:50.463924132Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.617755845s"
Nov 8 00:01:50.464153 containerd[1567]: time="2025-11-08T00:01:50.464097732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Nov 8 00:01:50.466380 containerd[1567]: time="2025-11-08T00:01:50.466293381Z" level=info msg="CreateContainer within sandbox \"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Nov 8 00:01:50.468430 containerd[1567]: time="2025-11-08T00:01:50.468378669Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 8 00:01:50.477178 containerd[1567]: time="2025-11-08T00:01:50.477125542Z" level=info msg="Container c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:01:50.489605 containerd[1567]: time="2025-11-08T00:01:50.489546069Z" level=info msg="CreateContainer within sandbox \"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\""
Nov 8 00:01:50.490157 containerd[1567]: time="2025-11-08T00:01:50.490069231Z" level=info msg="StartContainer for \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\""
Nov 8 00:01:50.491251 containerd[1567]: time="2025-11-08T00:01:50.491200996Z" level=info msg="connecting to shim c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5" address="unix:///run/containerd/s/1b0cc9568381a37a9e8e64f118ba5b21e7d2c52e37ce761ef86a2403f3b8e822" protocol=ttrpc version=3
Nov 8 00:01:50.521326 systemd[1]: Started cri-containerd-c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5.scope - libcontainer container c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5.
Nov 8 00:01:50.547502 containerd[1567]: time="2025-11-08T00:01:50.547463450Z" level=info msg="StartContainer for \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" returns successfully"
Nov 8 00:01:51.113132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70-rootfs.mount: Deactivated successfully.
Nov 8 00:01:51.141426 kubelet[2702]: E1108 00:01:51.141380 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:51.144870 kubelet[2702]: E1108 00:01:51.144008 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:51.144976 containerd[1567]: time="2025-11-08T00:01:51.144631340Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 8 00:01:51.164124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1532577569.mount: Deactivated successfully.
Nov 8 00:01:51.191203 containerd[1567]: time="2025-11-08T00:01:51.190415906Z" level=info msg="Container adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:01:51.201837 containerd[1567]: time="2025-11-08T00:01:51.201780667Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\""
Nov 8 00:01:51.202714 containerd[1567]: time="2025-11-08T00:01:51.202659870Z" level=info msg="StartContainer for \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\""
Nov 8 00:01:51.204422 containerd[1567]: time="2025-11-08T00:01:51.204385556Z" level=info msg="connecting to shim adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076" address="unix:///run/containerd/s/9444b659f9ddef4d932a7320eebbdf8dfd324c57fa50979670a9c8f808da5c8a" protocol=ttrpc version=3
Nov 8 00:01:51.234249 systemd[1]: Started cri-containerd-adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076.scope - libcontainer container adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076.
Nov 8 00:01:51.339021 containerd[1567]: time="2025-11-08T00:01:51.338981604Z" level=info msg="StartContainer for \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\" returns successfully"
Nov 8 00:01:51.340237 systemd[1]: cri-containerd-adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076.scope: Deactivated successfully.
Nov 8 00:01:51.342572 containerd[1567]: time="2025-11-08T00:01:51.342523336Z" level=info msg="received container exit event container_id:\"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\" id:\"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\" pid:3275 exited_at:{seconds:1762560111 nanos:342303376}"
Nov 8 00:01:52.112187 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076-rootfs.mount: Deactivated successfully.
Nov 8 00:01:52.148792 kubelet[2702]: E1108 00:01:52.148577 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:52.148792 kubelet[2702]: E1108 00:01:52.148633 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:52.153029 containerd[1567]: time="2025-11-08T00:01:52.151837998Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 8 00:01:52.164360 containerd[1567]: time="2025-11-08T00:01:52.164138400Z" level=info msg="Container b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:01:52.174962 kubelet[2702]: I1108 00:01:52.174900 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4lpv6" podStartSLOduration=3.148875014 podStartE2EDuration="9.174884117s" podCreationTimestamp="2025-11-08 00:01:43 +0000 UTC" firstStartedPulling="2025-11-08 00:01:44.438758752 +0000 UTC m=+6.478644401" lastFinishedPulling="2025-11-08 00:01:50.464767855 +0000 UTC m=+12.504653504" observedRunningTime="2025-11-08 00:01:51.193259076 +0000 UTC m=+13.233145125" watchObservedRunningTime="2025-11-08 00:01:52.174884117 +0000 UTC m=+14.214769766"
Nov 8 00:01:52.175725 containerd[1567]: time="2025-11-08T00:01:52.175689400Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\""
Nov 8 00:01:52.177176 containerd[1567]: time="2025-11-08T00:01:52.177128525Z" level=info msg="StartContainer for \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\""
Nov 8 00:01:52.178591 containerd[1567]: time="2025-11-08T00:01:52.178560010Z" level=info msg="connecting to shim b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380" address="unix:///run/containerd/s/9444b659f9ddef4d932a7320eebbdf8dfd324c57fa50979670a9c8f808da5c8a" protocol=ttrpc version=3
Nov 8 00:01:52.201892 systemd[1]: Started cri-containerd-b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380.scope - libcontainer container b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380.
Nov 8 00:01:52.232495 systemd[1]: cri-containerd-b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380.scope: Deactivated successfully.
Nov 8 00:01:52.237149 containerd[1567]: time="2025-11-08T00:01:52.235875687Z" level=info msg="received container exit event container_id:\"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\" id:\"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\" pid:3315 exited_at:{seconds:1762560112 nanos:233388758}"
Nov 8 00:01:52.237560 containerd[1567]: time="2025-11-08T00:01:52.237513933Z" level=info msg="StartContainer for \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\" returns successfully"
Nov 8 00:01:53.112259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380-rootfs.mount: Deactivated successfully.
Nov 8 00:01:53.155374 kubelet[2702]: E1108 00:01:53.155321 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:53.157474 containerd[1567]: time="2025-11-08T00:01:53.157428268Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 8 00:01:53.178685 containerd[1567]: time="2025-11-08T00:01:53.178311816Z" level=info msg="Container 549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:01:53.183873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3407563514.mount: Deactivated successfully.
Nov 8 00:01:53.187506 containerd[1567]: time="2025-11-08T00:01:53.187441246Z" level=info msg="CreateContainer within sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\""
Nov 8 00:01:53.190242 containerd[1567]: time="2025-11-08T00:01:53.190188175Z" level=info msg="StartContainer for \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\""
Nov 8 00:01:53.191345 containerd[1567]: time="2025-11-08T00:01:53.191317059Z" level=info msg="connecting to shim 549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb" address="unix:///run/containerd/s/9444b659f9ddef4d932a7320eebbdf8dfd324c57fa50979670a9c8f808da5c8a" protocol=ttrpc version=3
Nov 8 00:01:53.215289 systemd[1]: Started cri-containerd-549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb.scope - libcontainer container 549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb.
Nov 8 00:01:53.265412 containerd[1567]: time="2025-11-08T00:01:53.264457817Z" level=info msg="StartContainer for \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" returns successfully"
Nov 8 00:01:53.387256 kubelet[2702]: I1108 00:01:53.386810 2702 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Nov 8 00:01:53.441516 systemd[1]: Created slice kubepods-burstable-pod0e0fb20e_58f1_48b4_93f5_6d9806a55e42.slice - libcontainer container kubepods-burstable-pod0e0fb20e_58f1_48b4_93f5_6d9806a55e42.slice.
Nov 8 00:01:53.455079 systemd[1]: Created slice kubepods-burstable-pod488cf906_818b_4f1d_b381_8d7c5d6da87a.slice - libcontainer container kubepods-burstable-pod488cf906_818b_4f1d_b381_8d7c5d6da87a.slice.
Nov 8 00:01:53.527049 kubelet[2702]: I1108 00:01:53.526940 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8z75\" (UniqueName: \"kubernetes.io/projected/0e0fb20e-58f1-48b4-93f5-6d9806a55e42-kube-api-access-k8z75\") pod \"coredns-668d6bf9bc-8hv9m\" (UID: \"0e0fb20e-58f1-48b4-93f5-6d9806a55e42\") " pod="kube-system/coredns-668d6bf9bc-8hv9m"
Nov 8 00:01:53.527049 kubelet[2702]: I1108 00:01:53.526985 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0e0fb20e-58f1-48b4-93f5-6d9806a55e42-config-volume\") pod \"coredns-668d6bf9bc-8hv9m\" (UID: \"0e0fb20e-58f1-48b4-93f5-6d9806a55e42\") " pod="kube-system/coredns-668d6bf9bc-8hv9m"
Nov 8 00:01:53.527049 kubelet[2702]: I1108 00:01:53.527012 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kh4p\" (UniqueName: \"kubernetes.io/projected/488cf906-818b-4f1d-b381-8d7c5d6da87a-kube-api-access-2kh4p\") pod \"coredns-668d6bf9bc-8mwh2\" (UID: \"488cf906-818b-4f1d-b381-8d7c5d6da87a\") " pod="kube-system/coredns-668d6bf9bc-8mwh2"
Nov 8 00:01:53.527049 kubelet[2702]: I1108 00:01:53.527030 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/488cf906-818b-4f1d-b381-8d7c5d6da87a-config-volume\") pod \"coredns-668d6bf9bc-8mwh2\" (UID: \"488cf906-818b-4f1d-b381-8d7c5d6da87a\") " pod="kube-system/coredns-668d6bf9bc-8mwh2"
Nov 8 00:01:53.755634 kubelet[2702]: E1108 00:01:53.755500 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:53.756528 containerd[1567]: time="2025-11-08T00:01:53.756496224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8hv9m,Uid:0e0fb20e-58f1-48b4-93f5-6d9806a55e42,Namespace:kube-system,Attempt:0,}"
Nov 8 00:01:53.758798 kubelet[2702]: E1108 00:01:53.758578 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:53.758984 containerd[1567]: time="2025-11-08T00:01:53.758952832Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8mwh2,Uid:488cf906-818b-4f1d-b381-8d7c5d6da87a,Namespace:kube-system,Attempt:0,}"
Nov 8 00:01:54.167838 kubelet[2702]: E1108 00:01:54.167708 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:54.192775 kubelet[2702]: I1108 00:01:54.192457 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-spgzz" podStartSLOduration=6.482686323 podStartE2EDuration="11.192439457s" podCreationTimestamp="2025-11-08 00:01:43 +0000 UTC" firstStartedPulling="2025-11-08 00:01:44.13581999 +0000 UTC m=+6.175705599" lastFinishedPulling="2025-11-08 00:01:48.845573084 +0000 UTC m=+10.885458733" observedRunningTime="2025-11-08 00:01:54.190891212 +0000 UTC m=+16.230776861" watchObservedRunningTime="2025-11-08 00:01:54.192439457 +0000 UTC m=+16.232325106"
Nov 8 00:01:54.845786 update_engine[1555]: I20251108 00:01:54.845706 1555 update_attempter.cc:509] Updating boot flags...
Nov 8 00:01:55.170138 kubelet[2702]: E1108 00:01:55.169262 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:55.366732 systemd-networkd[1473]: cilium_host: Link UP
Nov 8 00:01:55.366970 systemd-networkd[1473]: cilium_net: Link UP
Nov 8 00:01:55.367200 systemd-networkd[1473]: cilium_host: Gained carrier
Nov 8 00:01:55.367460 systemd-networkd[1473]: cilium_net: Gained carrier
Nov 8 00:01:55.451382 systemd-networkd[1473]: cilium_vxlan: Link UP
Nov 8 00:01:55.451389 systemd-networkd[1473]: cilium_vxlan: Gained carrier
Nov 8 00:01:55.587249 systemd-networkd[1473]: cilium_host: Gained IPv6LL
Nov 8 00:01:55.667191 systemd-networkd[1473]: cilium_net: Gained IPv6LL
Nov 8 00:01:55.736095 kernel: NET: Registered PF_ALG protocol family
Nov 8 00:01:56.171460 kubelet[2702]: E1108 00:01:56.171225 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:56.340933 systemd-networkd[1473]: lxc_health: Link UP
Nov 8 00:01:56.341700 systemd-networkd[1473]: lxc_health: Gained carrier
Nov 8 00:01:56.836090 kernel: eth0: renamed from tmp3b6ea
Nov 8 00:01:56.836182 kernel: eth0: renamed from tmpca33f
Nov 8 00:01:56.842307 systemd-networkd[1473]: lxc3307f5d57403: Link UP
Nov 8 00:01:56.849724 systemd-networkd[1473]: lxc8629cbfa25cd: Link UP
Nov 8 00:01:56.850592 systemd-networkd[1473]: lxc3307f5d57403: Gained carrier
Nov 8 00:01:56.850736 systemd-networkd[1473]: lxc8629cbfa25cd: Gained carrier
Nov 8 00:01:57.259369 systemd-networkd[1473]: cilium_vxlan: Gained IPv6LL
Nov 8 00:01:58.037863 kubelet[2702]: E1108 00:01:58.037823 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:58.174573 kubelet[2702]: E1108 00:01:58.174530 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:01:58.283230 systemd-networkd[1473]: lxc_health: Gained IPv6LL
Nov 8 00:01:58.475606 systemd-networkd[1473]: lxc8629cbfa25cd: Gained IPv6LL
Nov 8 00:01:58.539234 systemd-networkd[1473]: lxc3307f5d57403: Gained IPv6LL
Nov 8 00:01:59.176552 kubelet[2702]: E1108 00:01:59.176504 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:00.711759 containerd[1567]: time="2025-11-08T00:02:00.711710951Z" level=info msg="connecting to shim 3b6ea9404c7808d077866e9b5d3ff1e4184fd5cd56c52efc96a3c930b7e799db" address="unix:///run/containerd/s/3f8a60c2fdaf62799eddebea26d928ca3f0f067b8141c6903d6d366f19fd4694" namespace=k8s.io protocol=ttrpc version=3
Nov 8 00:02:00.712296 containerd[1567]: time="2025-11-08T00:02:00.712267992Z" level=info msg="connecting to shim ca33f7b59440c2f575d8b9ca97144ab0587e8e1f8dd0810ecf037367ac633c3c" address="unix:///run/containerd/s/80150ed74d4833ba8142b948cf7429576eeb241c21ddd4c8b1ff18f2292fb65e" namespace=k8s.io protocol=ttrpc version=3
Nov 8 00:02:00.741774 systemd[1]: Started cri-containerd-3b6ea9404c7808d077866e9b5d3ff1e4184fd5cd56c52efc96a3c930b7e799db.scope - libcontainer container 3b6ea9404c7808d077866e9b5d3ff1e4184fd5cd56c52efc96a3c930b7e799db.
Nov 8 00:02:00.743190 systemd[1]: Started cri-containerd-ca33f7b59440c2f575d8b9ca97144ab0587e8e1f8dd0810ecf037367ac633c3c.scope - libcontainer container ca33f7b59440c2f575d8b9ca97144ab0587e8e1f8dd0810ecf037367ac633c3c.
Nov 8 00:02:00.757719 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 8 00:02:00.760654 systemd-resolved[1278]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Nov 8 00:02:00.785043 containerd[1567]: time="2025-11-08T00:02:00.784894642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8mwh2,Uid:488cf906-818b-4f1d-b381-8d7c5d6da87a,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca33f7b59440c2f575d8b9ca97144ab0587e8e1f8dd0810ecf037367ac633c3c\""
Nov 8 00:02:00.786710 containerd[1567]: time="2025-11-08T00:02:00.786662406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8hv9m,Uid:0e0fb20e-58f1-48b4-93f5-6d9806a55e42,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b6ea9404c7808d077866e9b5d3ff1e4184fd5cd56c52efc96a3c930b7e799db\""
Nov 8 00:02:00.787008 kubelet[2702]: E1108 00:02:00.786973 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:00.787770 kubelet[2702]: E1108 00:02:00.787749 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:00.789324 containerd[1567]: time="2025-11-08T00:02:00.789115452Z" level=info msg="CreateContainer within sandbox \"ca33f7b59440c2f575d8b9ca97144ab0587e8e1f8dd0810ecf037367ac633c3c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 8 00:02:00.791276 containerd[1567]: time="2025-11-08T00:02:00.791246337Z" level=info msg="CreateContainer within sandbox \"3b6ea9404c7808d077866e9b5d3ff1e4184fd5cd56c52efc96a3c930b7e799db\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Nov 8 00:02:00.800700 containerd[1567]: time="2025-11-08T00:02:00.800372318Z" level=info msg="Container a8cfedfdc92d4cf2b8bba2159dc0ff5ee26aab55837ee8d4f6fa59fe3e111697: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:02:00.803538 containerd[1567]: time="2025-11-08T00:02:00.803507045Z" level=info msg="Container 22e56bf91231b1d0a32ebd1700f41622387f5ca888a7cdc5ce41368566f92790: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:02:00.807836 containerd[1567]: time="2025-11-08T00:02:00.807806215Z" level=info msg="CreateContainer within sandbox \"ca33f7b59440c2f575d8b9ca97144ab0587e8e1f8dd0810ecf037367ac633c3c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8cfedfdc92d4cf2b8bba2159dc0ff5ee26aab55837ee8d4f6fa59fe3e111697\""
Nov 8 00:02:00.809882 containerd[1567]: time="2025-11-08T00:02:00.809851860Z" level=info msg="StartContainer for \"a8cfedfdc92d4cf2b8bba2159dc0ff5ee26aab55837ee8d4f6fa59fe3e111697\""
Nov 8 00:02:00.810579 containerd[1567]: time="2025-11-08T00:02:00.810547182Z" level=info msg="CreateContainer within sandbox \"3b6ea9404c7808d077866e9b5d3ff1e4184fd5cd56c52efc96a3c930b7e799db\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22e56bf91231b1d0a32ebd1700f41622387f5ca888a7cdc5ce41368566f92790\""
Nov 8 00:02:00.810973 containerd[1567]: time="2025-11-08T00:02:00.810947303Z" level=info msg="StartContainer for \"22e56bf91231b1d0a32ebd1700f41622387f5ca888a7cdc5ce41368566f92790\""
Nov 8 00:02:00.812186 containerd[1567]: time="2025-11-08T00:02:00.812142145Z" level=info msg="connecting to shim 22e56bf91231b1d0a32ebd1700f41622387f5ca888a7cdc5ce41368566f92790" address="unix:///run/containerd/s/3f8a60c2fdaf62799eddebea26d928ca3f0f067b8141c6903d6d366f19fd4694" protocol=ttrpc version=3
Nov 8 00:02:00.812367 containerd[1567]: time="2025-11-08T00:02:00.812216706Z" level=info msg="connecting to shim a8cfedfdc92d4cf2b8bba2159dc0ff5ee26aab55837ee8d4f6fa59fe3e111697" address="unix:///run/containerd/s/80150ed74d4833ba8142b948cf7429576eeb241c21ddd4c8b1ff18f2292fb65e" protocol=ttrpc version=3
Nov 8 00:02:00.834246 systemd[1]: Started cri-containerd-22e56bf91231b1d0a32ebd1700f41622387f5ca888a7cdc5ce41368566f92790.scope - libcontainer container 22e56bf91231b1d0a32ebd1700f41622387f5ca888a7cdc5ce41368566f92790.
Nov 8 00:02:00.835662 systemd[1]: Started cri-containerd-a8cfedfdc92d4cf2b8bba2159dc0ff5ee26aab55837ee8d4f6fa59fe3e111697.scope - libcontainer container a8cfedfdc92d4cf2b8bba2159dc0ff5ee26aab55837ee8d4f6fa59fe3e111697.
Nov 8 00:02:00.868414 containerd[1567]: time="2025-11-08T00:02:00.868300876Z" level=info msg="StartContainer for \"a8cfedfdc92d4cf2b8bba2159dc0ff5ee26aab55837ee8d4f6fa59fe3e111697\" returns successfully"
Nov 8 00:02:00.868561 containerd[1567]: time="2025-11-08T00:02:00.868436997Z" level=info msg="StartContainer for \"22e56bf91231b1d0a32ebd1700f41622387f5ca888a7cdc5ce41368566f92790\" returns successfully"
Nov 8 00:02:01.186400 kubelet[2702]: E1108 00:02:01.186293 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:01.187954 kubelet[2702]: E1108 00:02:01.187936 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:01.199046 kubelet[2702]: I1108 00:02:01.198950 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8mwh2" podStartSLOduration=18.198935068 podStartE2EDuration="18.198935068s" podCreationTimestamp="2025-11-08 00:01:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:02:01.198768707 +0000 UTC m=+23.238654356" watchObservedRunningTime="2025-11-08 00:02:01.198935068 +0000 UTC m=+23.238820717"
Nov 8 00:02:01.214423 kubelet[2702]: I1108 00:02:01.214184 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8hv9m" podStartSLOduration=17.214165582 podStartE2EDuration="17.214165582s" podCreationTimestamp="2025-11-08 00:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:02:01.209396611 +0000 UTC m=+23.249282260" watchObservedRunningTime="2025-11-08 00:02:01.214165582 +0000 UTC m=+23.254051231"
Nov 8 00:02:01.691209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2134923560.mount: Deactivated successfully.
Nov 8 00:02:02.188537 kubelet[2702]: E1108 00:02:02.188498 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:04.556492 systemd[1]: Started sshd@7-10.0.0.84:22-10.0.0.1:33636.service - OpenSSH per-connection server daemon (10.0.0.1:33636).
Nov 8 00:02:04.634729 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 33636 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y
Nov 8 00:02:04.636230 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:02:04.641344 systemd-logind[1554]: New session 8 of user core.
Nov 8 00:02:04.655288 systemd[1]: Started session-8.scope - Session 8 of User core.
Nov 8 00:02:04.790653 sshd[4050]: Connection closed by 10.0.0.1 port 33636
Nov 8 00:02:04.791501 sshd-session[4047]: pam_unix(sshd:session): session closed for user core
Nov 8 00:02:04.794917 systemd[1]: sshd@7-10.0.0.84:22-10.0.0.1:33636.service: Deactivated successfully.
Nov 8 00:02:04.796790 systemd[1]: session-8.scope: Deactivated successfully.
Nov 8 00:02:04.800240 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit. Nov 8 00:02:04.801573 systemd-logind[1554]: Removed session 8. Nov 8 00:02:09.805831 systemd[1]: Started sshd@8-10.0.0.84:22-10.0.0.1:39558.service - OpenSSH per-connection server daemon (10.0.0.1:39558). Nov 8 00:02:09.868845 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 39558 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:09.870281 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:09.876166 systemd-logind[1554]: New session 9 of user core. Nov 8 00:02:09.885987 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 8 00:02:10.014794 sshd[4072]: Connection closed by 10.0.0.1 port 39558 Nov 8 00:02:10.015165 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:10.022749 systemd[1]: sshd@8-10.0.0.84:22-10.0.0.1:39558.service: Deactivated successfully. Nov 8 00:02:10.024921 systemd[1]: session-9.scope: Deactivated successfully. Nov 8 00:02:10.026163 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. Nov 8 00:02:10.027340 systemd-logind[1554]: Removed session 9. Nov 8 00:02:11.185984 kubelet[2702]: E1108 00:02:11.185087 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:02:11.212786 kubelet[2702]: E1108 00:02:11.212755 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:02:15.032025 systemd[1]: Started sshd@9-10.0.0.84:22-10.0.0.1:39562.service - OpenSSH per-connection server daemon (10.0.0.1:39562). 
Nov 8 00:02:15.108188 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 39562 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:15.109951 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:15.116751 systemd-logind[1554]: New session 10 of user core. Nov 8 00:02:15.122346 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 8 00:02:15.252419 sshd[4096]: Connection closed by 10.0.0.1 port 39562 Nov 8 00:02:15.253294 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:15.257651 systemd[1]: sshd@9-10.0.0.84:22-10.0.0.1:39562.service: Deactivated successfully. Nov 8 00:02:15.259500 systemd[1]: session-10.scope: Deactivated successfully. Nov 8 00:02:15.260472 systemd-logind[1554]: Session 10 logged out. Waiting for processes to exit. Nov 8 00:02:15.261788 systemd-logind[1554]: Removed session 10. Nov 8 00:02:20.267963 systemd[1]: Started sshd@10-10.0.0.84:22-10.0.0.1:59770.service - OpenSSH per-connection server daemon (10.0.0.1:59770). Nov 8 00:02:20.338139 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 59770 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:20.339728 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:20.349026 systemd-logind[1554]: New session 11 of user core. Nov 8 00:02:20.364324 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 8 00:02:20.512688 sshd[4114]: Connection closed by 10.0.0.1 port 59770 Nov 8 00:02:20.513343 sshd-session[4111]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:20.525955 systemd[1]: sshd@10-10.0.0.84:22-10.0.0.1:59770.service: Deactivated successfully. Nov 8 00:02:20.528253 systemd[1]: session-11.scope: Deactivated successfully. Nov 8 00:02:20.529781 systemd-logind[1554]: Session 11 logged out. Waiting for processes to exit. 
Nov 8 00:02:20.532024 systemd-logind[1554]: Removed session 11. Nov 8 00:02:20.534367 systemd[1]: Started sshd@11-10.0.0.84:22-10.0.0.1:59778.service - OpenSSH per-connection server daemon (10.0.0.1:59778). Nov 8 00:02:20.599643 sshd[4129]: Accepted publickey for core from 10.0.0.1 port 59778 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:20.601151 sshd-session[4129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:20.605788 systemd-logind[1554]: New session 12 of user core. Nov 8 00:02:20.621298 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 8 00:02:20.779867 sshd[4132]: Connection closed by 10.0.0.1 port 59778 Nov 8 00:02:20.780490 sshd-session[4129]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:20.793710 systemd[1]: sshd@11-10.0.0.84:22-10.0.0.1:59778.service: Deactivated successfully. Nov 8 00:02:20.796753 systemd[1]: session-12.scope: Deactivated successfully. Nov 8 00:02:20.797854 systemd-logind[1554]: Session 12 logged out. Waiting for processes to exit. Nov 8 00:02:20.801385 systemd[1]: Started sshd@12-10.0.0.84:22-10.0.0.1:59790.service - OpenSSH per-connection server daemon (10.0.0.1:59790). Nov 8 00:02:20.803001 systemd-logind[1554]: Removed session 12. Nov 8 00:02:20.858774 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 59790 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:20.860445 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:20.866029 systemd-logind[1554]: New session 13 of user core. Nov 8 00:02:20.877295 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 8 00:02:20.998191 sshd[4147]: Connection closed by 10.0.0.1 port 59790 Nov 8 00:02:20.998874 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:21.005251 systemd[1]: sshd@12-10.0.0.84:22-10.0.0.1:59790.service: Deactivated successfully. 
Nov 8 00:02:21.007358 systemd[1]: session-13.scope: Deactivated successfully. Nov 8 00:02:21.008103 systemd-logind[1554]: Session 13 logged out. Waiting for processes to exit. Nov 8 00:02:21.012645 systemd-logind[1554]: Removed session 13. Nov 8 00:02:26.016666 systemd[1]: Started sshd@13-10.0.0.84:22-10.0.0.1:59800.service - OpenSSH per-connection server daemon (10.0.0.1:59800). Nov 8 00:02:26.092722 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 59800 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:26.094711 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:26.103025 systemd-logind[1554]: New session 14 of user core. Nov 8 00:02:26.112367 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 8 00:02:26.241033 sshd[4164]: Connection closed by 10.0.0.1 port 59800 Nov 8 00:02:26.241423 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:26.246859 systemd[1]: sshd@13-10.0.0.84:22-10.0.0.1:59800.service: Deactivated successfully. Nov 8 00:02:26.248632 systemd[1]: session-14.scope: Deactivated successfully. Nov 8 00:02:26.249693 systemd-logind[1554]: Session 14 logged out. Waiting for processes to exit. Nov 8 00:02:26.250996 systemd-logind[1554]: Removed session 14. Nov 8 00:02:31.265667 systemd[1]: Started sshd@14-10.0.0.84:22-10.0.0.1:35896.service - OpenSSH per-connection server daemon (10.0.0.1:35896). Nov 8 00:02:31.343597 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 35896 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:31.345497 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:31.352489 systemd-logind[1554]: New session 15 of user core. Nov 8 00:02:31.375326 systemd[1]: Started session-15.scope - Session 15 of User core. 
Nov 8 00:02:31.495451 sshd[4182]: Connection closed by 10.0.0.1 port 35896 Nov 8 00:02:31.496399 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:31.504987 systemd[1]: sshd@14-10.0.0.84:22-10.0.0.1:35896.service: Deactivated successfully. Nov 8 00:02:31.507137 systemd[1]: session-15.scope: Deactivated successfully. Nov 8 00:02:31.507837 systemd-logind[1554]: Session 15 logged out. Waiting for processes to exit. Nov 8 00:02:31.510484 systemd[1]: Started sshd@15-10.0.0.84:22-10.0.0.1:35898.service - OpenSSH per-connection server daemon (10.0.0.1:35898). Nov 8 00:02:31.511894 systemd-logind[1554]: Removed session 15. Nov 8 00:02:31.595891 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 35898 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:31.597279 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:31.601586 systemd-logind[1554]: New session 16 of user core. Nov 8 00:02:31.614500 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 8 00:02:31.831099 sshd[4199]: Connection closed by 10.0.0.1 port 35898 Nov 8 00:02:31.831673 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:31.839251 systemd[1]: sshd@15-10.0.0.84:22-10.0.0.1:35898.service: Deactivated successfully. Nov 8 00:02:31.841078 systemd[1]: session-16.scope: Deactivated successfully. Nov 8 00:02:31.841895 systemd-logind[1554]: Session 16 logged out. Waiting for processes to exit. Nov 8 00:02:31.845481 systemd[1]: Started sshd@16-10.0.0.84:22-10.0.0.1:35904.service - OpenSSH per-connection server daemon (10.0.0.1:35904). Nov 8 00:02:31.846050 systemd-logind[1554]: Removed session 16. 
Nov 8 00:02:31.915403 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 35904 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:31.916914 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:31.921784 systemd-logind[1554]: New session 17 of user core. Nov 8 00:02:31.928322 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 8 00:02:32.605796 sshd[4213]: Connection closed by 10.0.0.1 port 35904 Nov 8 00:02:32.603956 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:32.613155 systemd[1]: sshd@16-10.0.0.84:22-10.0.0.1:35904.service: Deactivated successfully. Nov 8 00:02:32.615796 systemd[1]: session-17.scope: Deactivated successfully. Nov 8 00:02:32.617357 systemd-logind[1554]: Session 17 logged out. Waiting for processes to exit. Nov 8 00:02:32.621941 systemd[1]: Started sshd@17-10.0.0.84:22-10.0.0.1:35918.service - OpenSSH per-connection server daemon (10.0.0.1:35918). Nov 8 00:02:32.627012 systemd-logind[1554]: Removed session 17. Nov 8 00:02:32.686234 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 35918 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:32.687133 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:32.691502 systemd-logind[1554]: New session 18 of user core. Nov 8 00:02:32.704246 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 8 00:02:32.938203 sshd[4237]: Connection closed by 10.0.0.1 port 35918 Nov 8 00:02:32.938880 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:32.947940 systemd[1]: sshd@17-10.0.0.84:22-10.0.0.1:35918.service: Deactivated successfully. Nov 8 00:02:32.949846 systemd[1]: session-18.scope: Deactivated successfully. Nov 8 00:02:32.951481 systemd-logind[1554]: Session 18 logged out. Waiting for processes to exit. 
Nov 8 00:02:32.955442 systemd[1]: Started sshd@18-10.0.0.84:22-10.0.0.1:35922.service - OpenSSH per-connection server daemon (10.0.0.1:35922). Nov 8 00:02:32.959760 systemd-logind[1554]: Removed session 18. Nov 8 00:02:33.014229 sshd[4249]: Accepted publickey for core from 10.0.0.1 port 35922 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:33.015806 sshd-session[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:33.022001 systemd-logind[1554]: New session 19 of user core. Nov 8 00:02:33.032277 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 8 00:02:33.141755 sshd[4252]: Connection closed by 10.0.0.1 port 35922 Nov 8 00:02:33.141572 sshd-session[4249]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:33.145704 systemd[1]: sshd@18-10.0.0.84:22-10.0.0.1:35922.service: Deactivated successfully. Nov 8 00:02:33.147733 systemd[1]: session-19.scope: Deactivated successfully. Nov 8 00:02:33.148544 systemd-logind[1554]: Session 19 logged out. Waiting for processes to exit. Nov 8 00:02:33.149873 systemd-logind[1554]: Removed session 19. Nov 8 00:02:38.158084 systemd[1]: Started sshd@19-10.0.0.84:22-10.0.0.1:35928.service - OpenSSH per-connection server daemon (10.0.0.1:35928). Nov 8 00:02:38.228685 sshd[4269]: Accepted publickey for core from 10.0.0.1 port 35928 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:38.231259 sshd-session[4269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:38.244631 systemd-logind[1554]: New session 20 of user core. Nov 8 00:02:38.251656 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 8 00:02:38.394686 sshd[4272]: Connection closed by 10.0.0.1 port 35928 Nov 8 00:02:38.395881 sshd-session[4269]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:38.402352 systemd[1]: sshd@19-10.0.0.84:22-10.0.0.1:35928.service: Deactivated successfully. 
Nov 8 00:02:38.404283 systemd[1]: session-20.scope: Deactivated successfully. Nov 8 00:02:38.405094 systemd-logind[1554]: Session 20 logged out. Waiting for processes to exit. Nov 8 00:02:38.407355 systemd-logind[1554]: Removed session 20. Nov 8 00:02:43.412996 systemd[1]: Started sshd@20-10.0.0.84:22-10.0.0.1:38530.service - OpenSSH per-connection server daemon (10.0.0.1:38530). Nov 8 00:02:43.483541 sshd[4287]: Accepted publickey for core from 10.0.0.1 port 38530 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:43.484890 sshd-session[4287]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:43.489849 systemd-logind[1554]: New session 21 of user core. Nov 8 00:02:43.496217 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 8 00:02:43.614314 sshd[4290]: Connection closed by 10.0.0.1 port 38530 Nov 8 00:02:43.614679 sshd-session[4287]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:43.618614 systemd-logind[1554]: Session 21 logged out. Waiting for processes to exit. Nov 8 00:02:43.619236 systemd[1]: sshd@20-10.0.0.84:22-10.0.0.1:38530.service: Deactivated successfully. Nov 8 00:02:43.621779 systemd[1]: session-21.scope: Deactivated successfully. Nov 8 00:02:43.623449 systemd-logind[1554]: Removed session 21. Nov 8 00:02:48.031226 kubelet[2702]: E1108 00:02:48.030873 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 8 00:02:48.636880 systemd[1]: Started sshd@21-10.0.0.84:22-10.0.0.1:38542.service - OpenSSH per-connection server daemon (10.0.0.1:38542). 
Nov 8 00:02:48.694839 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 38542 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:48.696441 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:48.701166 systemd-logind[1554]: New session 22 of user core. Nov 8 00:02:48.709293 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 8 00:02:48.827517 sshd[4308]: Connection closed by 10.0.0.1 port 38542 Nov 8 00:02:48.829269 sshd-session[4305]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:48.837374 systemd[1]: sshd@21-10.0.0.84:22-10.0.0.1:38542.service: Deactivated successfully. Nov 8 00:02:48.839248 systemd[1]: session-22.scope: Deactivated successfully. Nov 8 00:02:48.841373 systemd-logind[1554]: Session 22 logged out. Waiting for processes to exit. Nov 8 00:02:48.845277 systemd[1]: Started sshd@22-10.0.0.84:22-10.0.0.1:38546.service - OpenSSH per-connection server daemon (10.0.0.1:38546). Nov 8 00:02:48.846183 systemd-logind[1554]: Removed session 22. Nov 8 00:02:48.912670 sshd[4322]: Accepted publickey for core from 10.0.0.1 port 38546 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:48.913918 sshd-session[4322]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:48.919507 systemd-logind[1554]: New session 23 of user core. Nov 8 00:02:48.935990 systemd[1]: Started session-23.scope - Session 23 of User core. 
Nov 8 00:02:50.782085 containerd[1567]: time="2025-11-08T00:02:50.781375102Z" level=info msg="StopContainer for \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" with timeout 30 (s)" Nov 8 00:02:50.783409 containerd[1567]: time="2025-11-08T00:02:50.782408587Z" level=info msg="Stop container \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" with signal terminated" Nov 8 00:02:50.802980 systemd[1]: cri-containerd-c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5.scope: Deactivated successfully. Nov 8 00:02:50.807479 containerd[1567]: time="2025-11-08T00:02:50.804103805Z" level=info msg="received container exit event container_id:\"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" id:\"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" pid:3239 exited_at:{seconds:1762560170 nanos:803875644}" Nov 8 00:02:50.824485 containerd[1567]: time="2025-11-08T00:02:50.824441297Z" level=info msg="StopContainer for \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" with timeout 2 (s)" Nov 8 00:02:50.825062 containerd[1567]: time="2025-11-08T00:02:50.825037020Z" level=info msg="Stop container \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" with signal terminated" Nov 8 00:02:50.828323 containerd[1567]: time="2025-11-08T00:02:50.828258674Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 8 00:02:50.833861 systemd-networkd[1473]: lxc_health: Link DOWN Nov 8 00:02:50.833869 systemd-networkd[1473]: lxc_health: Lost carrier Nov 8 00:02:50.837229 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5-rootfs.mount: Deactivated successfully. 
Nov 8 00:02:50.849315 containerd[1567]: time="2025-11-08T00:02:50.849081848Z" level=info msg="StopContainer for \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" returns successfully" Nov 8 00:02:50.849688 containerd[1567]: time="2025-11-08T00:02:50.849645771Z" level=info msg="StopPodSandbox for \"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\"" Nov 8 00:02:50.853825 systemd[1]: cri-containerd-549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb.scope: Deactivated successfully. Nov 8 00:02:50.854161 systemd[1]: cri-containerd-549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb.scope: Consumed 6.545s CPU time, 122.1M memory peak, 124K read from disk, 12.9M written to disk. Nov 8 00:02:50.855142 containerd[1567]: time="2025-11-08T00:02:50.855085515Z" level=info msg="received container exit event container_id:\"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" id:\"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" pid:3352 exited_at:{seconds:1762560170 nanos:854869754}" Nov 8 00:02:50.858883 containerd[1567]: time="2025-11-08T00:02:50.858846492Z" level=info msg="Container to stop \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:02:50.873469 systemd[1]: cri-containerd-f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9.scope: Deactivated successfully. 
Nov 8 00:02:50.874983 containerd[1567]: time="2025-11-08T00:02:50.874946885Z" level=info msg="received sandbox exit event container_id:\"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\" id:\"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\" exit_status:137 exited_at:{seconds:1762560170 nanos:874644844}" monitor_name=podsandbox Nov 8 00:02:50.885579 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb-rootfs.mount: Deactivated successfully. Nov 8 00:02:50.898900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9-rootfs.mount: Deactivated successfully. Nov 8 00:02:50.900811 containerd[1567]: time="2025-11-08T00:02:50.900779562Z" level=info msg="StopContainer for \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" returns successfully" Nov 8 00:02:50.901012 containerd[1567]: time="2025-11-08T00:02:50.900983323Z" level=info msg="shim disconnected" id=f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9 namespace=k8s.io Nov 8 00:02:50.910465 containerd[1567]: time="2025-11-08T00:02:50.901014483Z" level=warning msg="cleaning up after shim disconnected" id=f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9 namespace=k8s.io Nov 8 00:02:50.910568 containerd[1567]: time="2025-11-08T00:02:50.910487166Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:02:50.910568 containerd[1567]: time="2025-11-08T00:02:50.901271924Z" level=info msg="StopPodSandbox for \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\"" Nov 8 00:02:50.910670 containerd[1567]: time="2025-11-08T00:02:50.910648006Z" level=info msg="Container to stop \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:02:50.910700 containerd[1567]: 
time="2025-11-08T00:02:50.910668447Z" level=info msg="Container to stop \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:02:50.910700 containerd[1567]: time="2025-11-08T00:02:50.910678967Z" level=info msg="Container to stop \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:02:50.910700 containerd[1567]: time="2025-11-08T00:02:50.910687407Z" level=info msg="Container to stop \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:02:50.910758 containerd[1567]: time="2025-11-08T00:02:50.910695847Z" level=info msg="Container to stop \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 8 00:02:50.918408 systemd[1]: cri-containerd-2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547.scope: Deactivated successfully. Nov 8 00:02:50.919621 containerd[1567]: time="2025-11-08T00:02:50.919579447Z" level=info msg="received sandbox exit event container_id:\"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" id:\"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" exit_status:137 exited_at:{seconds:1762560170 nanos:919031284}" monitor_name=podsandbox Nov 8 00:02:50.933619 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9-shm.mount: Deactivated successfully. 
Nov 8 00:02:50.935295 containerd[1567]: time="2025-11-08T00:02:50.935253918Z" level=info msg="TearDown network for sandbox \"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\" successfully" Nov 8 00:02:50.935295 containerd[1567]: time="2025-11-08T00:02:50.935293918Z" level=info msg="StopPodSandbox for \"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\" returns successfully" Nov 8 00:02:50.939512 containerd[1567]: time="2025-11-08T00:02:50.939317776Z" level=info msg="received sandbox container exit event sandbox_id:\"f15565199e7afe469833eaab31990995581ff1470f6601357958e41621c870c9\" exit_status:137 exited_at:{seconds:1762560170 nanos:874644844}" monitor_name=criService Nov 8 00:02:50.943169 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547-rootfs.mount: Deactivated successfully. Nov 8 00:02:50.950752 containerd[1567]: time="2025-11-08T00:02:50.950685187Z" level=info msg="shim disconnected" id=2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547 namespace=k8s.io Nov 8 00:02:50.950752 containerd[1567]: time="2025-11-08T00:02:50.950722988Z" level=warning msg="cleaning up after shim disconnected" id=2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547 namespace=k8s.io Nov 8 00:02:50.950752 containerd[1567]: time="2025-11-08T00:02:50.950754188Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 8 00:02:50.962463 containerd[1567]: time="2025-11-08T00:02:50.962383920Z" level=info msg="received sandbox container exit event sandbox_id:\"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" exit_status:137 exited_at:{seconds:1762560170 nanos:919031284}" monitor_name=criService Nov 8 00:02:50.976782 containerd[1567]: time="2025-11-08T00:02:50.976715305Z" level=info msg="TearDown network for sandbox \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" successfully" Nov 8 00:02:50.976782 containerd[1567]: 
time="2025-11-08T00:02:50.976760105Z" level=info msg="StopPodSandbox for \"2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547\" returns successfully" Nov 8 00:02:51.020939 kubelet[2702]: I1108 00:02:51.020875 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7nqr\" (UniqueName: \"kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-kube-api-access-d7nqr\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.020939 kubelet[2702]: I1108 00:02:51.020920 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hubble-tls\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.020939 kubelet[2702]: I1108 00:02:51.020939 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-clustermesh-secrets\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.020939 kubelet[2702]: I1108 00:02:51.020953 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-run\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021418 kubelet[2702]: I1108 00:02:51.020967 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-lib-modules\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021418 kubelet[2702]: I1108 00:02:51.020982 2702 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-bpf-maps\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021418 kubelet[2702]: I1108 00:02:51.020996 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hostproc\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021418 kubelet[2702]: I1108 00:02:51.021013 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-cgroup\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021418 kubelet[2702]: I1108 00:02:51.021027 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-xtables-lock\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021418 kubelet[2702]: I1108 00:02:51.021041 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7bgmf\" (UniqueName: \"kubernetes.io/projected/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-kube-api-access-7bgmf\") pod \"af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9\" (UID: \"af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9\") " Nov 8 00:02:51.021543 kubelet[2702]: I1108 00:02:51.021102 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-net\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: 
\"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021543 kubelet[2702]: I1108 00:02:51.021123 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-config-path\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021543 kubelet[2702]: I1108 00:02:51.021137 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-kernel\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021543 kubelet[2702]: I1108 00:02:51.021152 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cni-path\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021543 kubelet[2702]: I1108 00:02:51.021169 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-etc-cni-netd\") pod \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\" (UID: \"51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b\") " Nov 8 00:02:51.021543 kubelet[2702]: I1108 00:02:51.021187 2702 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-cilium-config-path\") pod \"af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9\" (UID: \"af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9\") " Nov 8 00:02:51.022816 kubelet[2702]: I1108 00:02:51.022499 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.022816 kubelet[2702]: I1108 00:02:51.022576 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.023564 kubelet[2702]: I1108 00:02:51.023540 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.023628 kubelet[2702]: I1108 00:02:51.023570 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hostproc" (OuterVolumeSpecName: "hostproc") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.023628 kubelet[2702]: I1108 00:02:51.023590 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.023628 kubelet[2702]: I1108 00:02:51.023604 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.023628 kubelet[2702]: I1108 00:02:51.023616 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.024151 kubelet[2702]: I1108 00:02:51.024119 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9" (UID: "af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:02:51.024456 kubelet[2702]: I1108 00:02:51.024336 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cni-path" (OuterVolumeSpecName: "cni-path") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.024456 kubelet[2702]: I1108 00:02:51.024398 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.024735 kubelet[2702]: I1108 00:02:51.024417 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 8 00:02:51.025795 kubelet[2702]: I1108 00:02:51.025698 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 8 00:02:51.029596 kubelet[2702]: I1108 00:02:51.029543 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 8 00:02:51.029596 kubelet[2702]: I1108 00:02:51.029542 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-kube-api-access-7bgmf" (OuterVolumeSpecName: "kube-api-access-7bgmf") pod "af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9" (UID: "af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9"). InnerVolumeSpecName "kube-api-access-7bgmf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:02:51.029596 kubelet[2702]: I1108 00:02:51.029594 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:02:51.029722 kubelet[2702]: I1108 00:02:51.029693 2702 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-kube-api-access-d7nqr" (OuterVolumeSpecName: "kube-api-access-d7nqr") pod "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" (UID: "51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b"). InnerVolumeSpecName "kube-api-access-d7nqr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 8 00:02:51.121917 kubelet[2702]: I1108 00:02:51.121786 2702 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.121917 kubelet[2702]: I1108 00:02:51.121833 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.121917 kubelet[2702]: I1108 00:02:51.121842 2702 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.121917 kubelet[2702]: I1108 00:02:51.121851 2702 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.121917 kubelet[2702]: I1108 00:02:51.121859 2702 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.121917 kubelet[2702]: I1108 00:02:51.121868 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.121917 kubelet[2702]: I1108 00:02:51.121879 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7nqr\" (UniqueName: \"kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-kube-api-access-d7nqr\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.121917 
kubelet[2702]: I1108 00:02:51.121887 2702 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.121931 2702 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.121942 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.121951 2702 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.121959 2702 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.121974 2702 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.121983 2702 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.121992 2702 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.122195 kubelet[2702]: I1108 00:02:51.122001 2702 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7bgmf\" (UniqueName: \"kubernetes.io/projected/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9-kube-api-access-7bgmf\") on node \"localhost\" DevicePath \"\"" Nov 8 00:02:51.315672 kubelet[2702]: I1108 00:02:51.315626 2702 scope.go:117] "RemoveContainer" containerID="c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5" Nov 8 00:02:51.319848 containerd[1567]: time="2025-11-08T00:02:51.319813582Z" level=info msg="RemoveContainer for \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\"" Nov 8 00:02:51.321791 systemd[1]: Removed slice kubepods-besteffort-podaf99f6cc_07f5_48d9_b9a4_e9ead82d6ad9.slice - libcontainer container kubepods-besteffort-podaf99f6cc_07f5_48d9_b9a4_e9ead82d6ad9.slice. Nov 8 00:02:51.328176 systemd[1]: Removed slice kubepods-burstable-pod51d8a772_7cc1_4b91_bb29_ac0b7fb2a39b.slice - libcontainer container kubepods-burstable-pod51d8a772_7cc1_4b91_bb29_ac0b7fb2a39b.slice. Nov 8 00:02:51.328277 systemd[1]: kubepods-burstable-pod51d8a772_7cc1_4b91_bb29_ac0b7fb2a39b.slice: Consumed 6.634s CPU time, 122.4M memory peak, 128K read from disk, 12.9M written to disk. 
Nov 8 00:02:51.331241 containerd[1567]: time="2025-11-08T00:02:51.331206592Z" level=info msg="RemoveContainer for \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" returns successfully" Nov 8 00:02:51.331655 kubelet[2702]: I1108 00:02:51.331627 2702 scope.go:117] "RemoveContainer" containerID="c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5" Nov 8 00:02:51.332177 containerd[1567]: time="2025-11-08T00:02:51.332024796Z" level=error msg="ContainerStatus for \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\": not found" Nov 8 00:02:51.332296 kubelet[2702]: E1108 00:02:51.332273 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\": not found" containerID="c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5" Nov 8 00:02:51.337181 kubelet[2702]: I1108 00:02:51.336600 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5"} err="failed to get container status \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\": rpc error: code = NotFound desc = an error occurred when try to find container \"c11bab75e4c706610763dab7676943709306c6c1e9dd0bed9fbfbf8cc2d4d8a5\": not found" Nov 8 00:02:51.337181 kubelet[2702]: I1108 00:02:51.336721 2702 scope.go:117] "RemoveContainer" containerID="549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb" Nov 8 00:02:51.341585 containerd[1567]: time="2025-11-08T00:02:51.341508038Z" level=info msg="RemoveContainer for \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\"" Nov 8 00:02:51.348165 
containerd[1567]: time="2025-11-08T00:02:51.347620865Z" level=info msg="RemoveContainer for \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" returns successfully" Nov 8 00:02:51.348522 kubelet[2702]: I1108 00:02:51.348490 2702 scope.go:117] "RemoveContainer" containerID="b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380" Nov 8 00:02:51.350942 containerd[1567]: time="2025-11-08T00:02:51.350436317Z" level=info msg="RemoveContainer for \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\"" Nov 8 00:02:51.354183 containerd[1567]: time="2025-11-08T00:02:51.354143693Z" level=info msg="RemoveContainer for \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\" returns successfully" Nov 8 00:02:51.354517 kubelet[2702]: I1108 00:02:51.354487 2702 scope.go:117] "RemoveContainer" containerID="adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076" Nov 8 00:02:51.356959 containerd[1567]: time="2025-11-08T00:02:51.356927826Z" level=info msg="RemoveContainer for \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\"" Nov 8 00:02:51.360754 containerd[1567]: time="2025-11-08T00:02:51.360709722Z" level=info msg="RemoveContainer for \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\" returns successfully" Nov 8 00:02:51.360968 kubelet[2702]: I1108 00:02:51.360936 2702 scope.go:117] "RemoveContainer" containerID="b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70" Nov 8 00:02:51.362500 containerd[1567]: time="2025-11-08T00:02:51.362474570Z" level=info msg="RemoveContainer for \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\"" Nov 8 00:02:51.366702 containerd[1567]: time="2025-11-08T00:02:51.366647749Z" level=info msg="RemoveContainer for \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\" returns successfully" Nov 8 00:02:51.366888 kubelet[2702]: I1108 00:02:51.366860 2702 scope.go:117] "RemoveContainer" 
containerID="e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1" Nov 8 00:02:51.368600 containerd[1567]: time="2025-11-08T00:02:51.368571397Z" level=info msg="RemoveContainer for \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\"" Nov 8 00:02:51.371593 containerd[1567]: time="2025-11-08T00:02:51.371558570Z" level=info msg="RemoveContainer for \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\" returns successfully" Nov 8 00:02:51.371811 kubelet[2702]: I1108 00:02:51.371778 2702 scope.go:117] "RemoveContainer" containerID="549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb" Nov 8 00:02:51.372127 containerd[1567]: time="2025-11-08T00:02:51.371992972Z" level=error msg="ContainerStatus for \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\": not found" Nov 8 00:02:51.372194 kubelet[2702]: E1108 00:02:51.372116 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\": not found" containerID="549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb" Nov 8 00:02:51.372194 kubelet[2702]: I1108 00:02:51.372138 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb"} err="failed to get container status \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"549c3c3ffa7f56bff5a0a345903491fe9381a86d34a7fd54c7f6ea8fcc5727cb\": not found" Nov 8 00:02:51.372194 kubelet[2702]: I1108 00:02:51.372157 2702 scope.go:117] "RemoveContainer" 
containerID="b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380" Nov 8 00:02:51.372512 containerd[1567]: time="2025-11-08T00:02:51.372358614Z" level=error msg="ContainerStatus for \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\": not found" Nov 8 00:02:51.372753 kubelet[2702]: E1108 00:02:51.372627 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\": not found" containerID="b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380" Nov 8 00:02:51.372753 kubelet[2702]: I1108 00:02:51.372656 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380"} err="failed to get container status \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\": rpc error: code = NotFound desc = an error occurred when try to find container \"b307bff0d2b76c4343c132f32bc74484a9186d8f97f3ab26a9a311dcd11cb380\": not found" Nov 8 00:02:51.372753 kubelet[2702]: I1108 00:02:51.372674 2702 scope.go:117] "RemoveContainer" containerID="adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076" Nov 8 00:02:51.372975 containerd[1567]: time="2025-11-08T00:02:51.372915616Z" level=error msg="ContainerStatus for \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\": not found" Nov 8 00:02:51.373131 kubelet[2702]: E1108 00:02:51.373097 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\": not found" containerID="adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076" Nov 8 00:02:51.373189 kubelet[2702]: I1108 00:02:51.373140 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076"} err="failed to get container status \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\": rpc error: code = NotFound desc = an error occurred when try to find container \"adb54129e76822b421fbae6a1ddb96de800478cbd1fb9d56f2c2670b7f27f076\": not found" Nov 8 00:02:51.373189 kubelet[2702]: I1108 00:02:51.373159 2702 scope.go:117] "RemoveContainer" containerID="b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70" Nov 8 00:02:51.373412 containerd[1567]: time="2025-11-08T00:02:51.373360618Z" level=error msg="ContainerStatus for \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\": not found" Nov 8 00:02:51.373553 kubelet[2702]: E1108 00:02:51.373531 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\": not found" containerID="b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70" Nov 8 00:02:51.373593 kubelet[2702]: I1108 00:02:51.373563 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70"} err="failed to get container status \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"b4fcf63d38531fe150c6c6a1a188737851e49a0d40368e9287f1a9f4f0532a70\": not found" Nov 8 00:02:51.373593 kubelet[2702]: I1108 00:02:51.373577 2702 scope.go:117] "RemoveContainer" containerID="e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1" Nov 8 00:02:51.373778 containerd[1567]: time="2025-11-08T00:02:51.373745380Z" level=error msg="ContainerStatus for \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\": not found" Nov 8 00:02:51.373934 kubelet[2702]: E1108 00:02:51.373913 2702 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\": not found" containerID="e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1" Nov 8 00:02:51.373969 kubelet[2702]: I1108 00:02:51.373937 2702 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1"} err="failed to get container status \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e63661b1fb893e01d9d6038c8b9fb36e587032af24ac79b36e7d6f0a2f4885c1\": not found" Nov 8 00:02:51.837269 systemd[1]: var-lib-kubelet-pods-af99f6cc\x2d07f5\x2d48d9\x2db9a4\x2de9ead82d6ad9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7bgmf.mount: Deactivated successfully. Nov 8 00:02:51.837364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2e6ee78767f8f22d279884aafb0a62adf4f1c31bba9a8311cd37d7bdb64d5547-shm.mount: Deactivated successfully. 
Nov 8 00:02:51.837435 systemd[1]: var-lib-kubelet-pods-51d8a772\x2d7cc1\x2d4b91\x2dbb29\x2dac0b7fb2a39b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd7nqr.mount: Deactivated successfully. Nov 8 00:02:51.837484 systemd[1]: var-lib-kubelet-pods-51d8a772\x2d7cc1\x2d4b91\x2dbb29\x2dac0b7fb2a39b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 8 00:02:51.837536 systemd[1]: var-lib-kubelet-pods-51d8a772\x2d7cc1\x2d4b91\x2dbb29\x2dac0b7fb2a39b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 8 00:02:52.031554 kubelet[2702]: I1108 00:02:52.031502 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" path="/var/lib/kubelet/pods/51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b/volumes" Nov 8 00:02:52.032555 kubelet[2702]: I1108 00:02:52.032146 2702 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9" path="/var/lib/kubelet/pods/af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9/volumes" Nov 8 00:02:52.720101 sshd[4325]: Connection closed by 10.0.0.1 port 38546 Nov 8 00:02:52.720412 sshd-session[4322]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:52.727315 systemd[1]: sshd@22-10.0.0.84:22-10.0.0.1:38546.service: Deactivated successfully. Nov 8 00:02:52.730107 systemd[1]: session-23.scope: Deactivated successfully. Nov 8 00:02:52.730396 systemd[1]: session-23.scope: Consumed 1.099s CPU time, 24.2M memory peak. Nov 8 00:02:52.730859 systemd-logind[1554]: Session 23 logged out. Waiting for processes to exit. Nov 8 00:02:52.733584 systemd[1]: Started sshd@23-10.0.0.84:22-10.0.0.1:51322.service - OpenSSH per-connection server daemon (10.0.0.1:51322). Nov 8 00:02:52.734118 systemd-logind[1554]: Removed session 23. 
Nov 8 00:02:52.799473 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 51322 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:52.800903 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:52.805884 systemd-logind[1554]: New session 24 of user core. Nov 8 00:02:52.819284 systemd[1]: Started session-24.scope - Session 24 of User core. Nov 8 00:02:53.116974 kubelet[2702]: E1108 00:02:53.116746 2702 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 8 00:02:54.494912 sshd[4474]: Connection closed by 10.0.0.1 port 51322 Nov 8 00:02:54.496302 sshd-session[4471]: pam_unix(sshd:session): session closed for user core Nov 8 00:02:54.513967 systemd[1]: sshd@23-10.0.0.84:22-10.0.0.1:51322.service: Deactivated successfully. Nov 8 00:02:54.516756 systemd[1]: session-24.scope: Deactivated successfully. Nov 8 00:02:54.519318 systemd[1]: session-24.scope: Consumed 1.520s CPU time, 26.1M memory peak. Nov 8 00:02:54.521068 systemd-logind[1554]: Session 24 logged out. Waiting for processes to exit. Nov 8 00:02:54.525420 systemd[1]: Started sshd@24-10.0.0.84:22-10.0.0.1:51334.service - OpenSSH per-connection server daemon (10.0.0.1:51334). Nov 8 00:02:54.529435 systemd-logind[1554]: Removed session 24. 
Nov 8 00:02:54.531273 kubelet[2702]: I1108 00:02:54.531239 2702 memory_manager.go:355] "RemoveStaleState removing state" podUID="51d8a772-7cc1-4b91-bb29-ac0b7fb2a39b" containerName="cilium-agent" Nov 8 00:02:54.532302 kubelet[2702]: I1108 00:02:54.531525 2702 memory_manager.go:355] "RemoveStaleState removing state" podUID="af99f6cc-07f5-48d9-b9a4-e9ead82d6ad9" containerName="cilium-operator" Nov 8 00:02:54.543627 systemd[1]: Created slice kubepods-burstable-pod6baeb493_5d70_4509_b38d_9ff4247b81ad.slice - libcontainer container kubepods-burstable-pod6baeb493_5d70_4509_b38d_9ff4247b81ad.slice. Nov 8 00:02:54.591735 sshd[4486]: Accepted publickey for core from 10.0.0.1 port 51334 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y Nov 8 00:02:54.593118 sshd-session[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 8 00:02:54.598036 systemd-logind[1554]: New session 25 of user core. Nov 8 00:02:54.607265 systemd[1]: Started session-25.scope - Session 25 of User core. 
Nov 8 00:02:54.643872 kubelet[2702]: I1108 00:02:54.643792 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-cilium-cgroup\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.643872 kubelet[2702]: I1108 00:02:54.643840 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-xtables-lock\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644251 kubelet[2702]: I1108 00:02:54.644095 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/6baeb493-5d70-4509-b38d-9ff4247b81ad-cilium-ipsec-secrets\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644251 kubelet[2702]: I1108 00:02:54.644131 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rtpdf\" (UniqueName: \"kubernetes.io/projected/6baeb493-5d70-4509-b38d-9ff4247b81ad-kube-api-access-rtpdf\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644251 kubelet[2702]: I1108 00:02:54.644150 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-host-proc-sys-net\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644251 kubelet[2702]: I1108 00:02:54.644167 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-host-proc-sys-kernel\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644251 kubelet[2702]: I1108 00:02:54.644186 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-hostproc\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644403 kubelet[2702]: I1108 00:02:54.644203 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-lib-modules\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644403 kubelet[2702]: I1108 00:02:54.644219 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-bpf-maps\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644608 kubelet[2702]: I1108 00:02:54.644475 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/6baeb493-5d70-4509-b38d-9ff4247b81ad-hubble-tls\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644608 kubelet[2702]: I1108 00:02:54.644505 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-cni-path\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644608 kubelet[2702]: I1108 00:02:54.644518 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-etc-cni-netd\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644608 kubelet[2702]: I1108 00:02:54.644533 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/6baeb493-5d70-4509-b38d-9ff4247b81ad-clustermesh-secrets\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644608 kubelet[2702]: I1108 00:02:54.644549 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6baeb493-5d70-4509-b38d-9ff4247b81ad-cilium-config-path\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.644608 kubelet[2702]: I1108 00:02:54.644566 2702 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/6baeb493-5d70-4509-b38d-9ff4247b81ad-cilium-run\") pod \"cilium-6qsds\" (UID: \"6baeb493-5d70-4509-b38d-9ff4247b81ad\") " pod="kube-system/cilium-6qsds"
Nov 8 00:02:54.659421 sshd[4489]: Connection closed by 10.0.0.1 port 51334
Nov 8 00:02:54.659791 sshd-session[4486]: pam_unix(sshd:session): session closed for user core
Nov 8 00:02:54.670368 systemd[1]: sshd@24-10.0.0.84:22-10.0.0.1:51334.service: Deactivated successfully.
Nov 8 00:02:54.671980 systemd[1]: session-25.scope: Deactivated successfully.
Nov 8 00:02:54.674618 systemd-logind[1554]: Session 25 logged out. Waiting for processes to exit.
Nov 8 00:02:54.676477 systemd[1]: Started sshd@25-10.0.0.84:22-10.0.0.1:51342.service - OpenSSH per-connection server daemon (10.0.0.1:51342).
Nov 8 00:02:54.677158 systemd-logind[1554]: Removed session 25.
Nov 8 00:02:54.736685 sshd[4496]: Accepted publickey for core from 10.0.0.1 port 51342 ssh2: RSA SHA256:FAVExuDlYq3gF2W1zNPEB/OEHrl6bpWJ51XPtNkFj+Y
Nov 8 00:02:54.738028 sshd-session[4496]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 8 00:02:54.743120 systemd-logind[1554]: New session 26 of user core.
Nov 8 00:02:54.751696 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 8 00:02:54.848536 kubelet[2702]: E1108 00:02:54.848490 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:54.849115 containerd[1567]: time="2025-11-08T00:02:54.849079848Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6qsds,Uid:6baeb493-5d70-4509-b38d-9ff4247b81ad,Namespace:kube-system,Attempt:0,}"
Nov 8 00:02:54.870448 containerd[1567]: time="2025-11-08T00:02:54.870404816Z" level=info msg="connecting to shim 50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293" address="unix:///run/containerd/s/fafe49c2464df36765ce61eeb40fb4b214c0cff800b2e5fb316b06e0ede187ea" namespace=k8s.io protocol=ttrpc version=3
Nov 8 00:02:54.900249 systemd[1]: Started cri-containerd-50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293.scope - libcontainer container 50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293.
Nov 8 00:02:54.926420 containerd[1567]: time="2025-11-08T00:02:54.926369966Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6qsds,Uid:6baeb493-5d70-4509-b38d-9ff4247b81ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\""
Nov 8 00:02:54.927817 kubelet[2702]: E1108 00:02:54.927317 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:54.929455 containerd[1567]: time="2025-11-08T00:02:54.929419819Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 8 00:02:54.936393 containerd[1567]: time="2025-11-08T00:02:54.936344928Z" level=info msg="Container dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:02:54.942110 containerd[1567]: time="2025-11-08T00:02:54.942041311Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84\""
Nov 8 00:02:54.942742 containerd[1567]: time="2025-11-08T00:02:54.942710954Z" level=info msg="StartContainer for \"dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84\""
Nov 8 00:02:54.949872 containerd[1567]: time="2025-11-08T00:02:54.949786183Z" level=info msg="connecting to shim dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84" address="unix:///run/containerd/s/fafe49c2464df36765ce61eeb40fb4b214c0cff800b2e5fb316b06e0ede187ea" protocol=ttrpc version=3
Nov 8 00:02:54.983360 systemd[1]: Started cri-containerd-dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84.scope - libcontainer container dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84.
Nov 8 00:02:55.015272 containerd[1567]: time="2025-11-08T00:02:55.015141091Z" level=info msg="StartContainer for \"dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84\" returns successfully"
Nov 8 00:02:55.023299 systemd[1]: cri-containerd-dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84.scope: Deactivated successfully.
Nov 8 00:02:55.028599 containerd[1567]: time="2025-11-08T00:02:55.028502665Z" level=info msg="received container exit event container_id:\"dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84\" id:\"dd379bac8e9a038ccbb8e565e88f45abde62fc038f98f66dfd1e0c61f7687a84\" pid:4568 exited_at:{seconds:1762560175 nanos:28219063}"
Nov 8 00:02:55.333562 kubelet[2702]: E1108 00:02:55.333363 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:55.336772 containerd[1567]: time="2025-11-08T00:02:55.336135303Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 8 00:02:55.348625 containerd[1567]: time="2025-11-08T00:02:55.348585514Z" level=info msg="Container e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:02:55.355919 containerd[1567]: time="2025-11-08T00:02:55.355869583Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806\""
Nov 8 00:02:55.357788 containerd[1567]: time="2025-11-08T00:02:55.356639186Z" level=info msg="StartContainer for \"e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806\""
Nov 8 00:02:55.357788 containerd[1567]: time="2025-11-08T00:02:55.357515590Z" level=info msg="connecting to shim e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806" address="unix:///run/containerd/s/fafe49c2464df36765ce61eeb40fb4b214c0cff800b2e5fb316b06e0ede187ea" protocol=ttrpc version=3
Nov 8 00:02:55.378239 systemd[1]: Started cri-containerd-e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806.scope - libcontainer container e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806.
Nov 8 00:02:55.403864 containerd[1567]: time="2025-11-08T00:02:55.403829256Z" level=info msg="StartContainer for \"e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806\" returns successfully"
Nov 8 00:02:55.411270 systemd[1]: cri-containerd-e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806.scope: Deactivated successfully.
Nov 8 00:02:55.413218 containerd[1567]: time="2025-11-08T00:02:55.411627007Z" level=info msg="received container exit event container_id:\"e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806\" id:\"e4b143d1a43787363c4c5d9fd8c928ff6e6b6ce7a93a808ae0c8591782dcc806\" pid:4618 exited_at:{seconds:1762560175 nanos:411427887}"
Nov 8 00:02:56.338642 kubelet[2702]: E1108 00:02:56.338586 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:56.351844 containerd[1567]: time="2025-11-08T00:02:56.351798043Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 8 00:02:56.386655 containerd[1567]: time="2025-11-08T00:02:56.386594780Z" level=info msg="Container 1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:02:56.405267 containerd[1567]: time="2025-11-08T00:02:56.405166613Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93\""
Nov 8 00:02:56.405774 containerd[1567]: time="2025-11-08T00:02:56.405733855Z" level=info msg="StartContainer for \"1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93\""
Nov 8 00:02:56.407376 containerd[1567]: time="2025-11-08T00:02:56.407340142Z" level=info msg="connecting to shim 1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93" address="unix:///run/containerd/s/fafe49c2464df36765ce61eeb40fb4b214c0cff800b2e5fb316b06e0ede187ea" protocol=ttrpc version=3
Nov 8 00:02:56.436259 systemd[1]: Started cri-containerd-1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93.scope - libcontainer container 1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93.
Nov 8 00:02:56.524891 systemd[1]: cri-containerd-1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93.scope: Deactivated successfully.
Nov 8 00:02:56.526956 containerd[1567]: time="2025-11-08T00:02:56.526916212Z" level=info msg="received container exit event container_id:\"1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93\" id:\"1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93\" pid:4662 exited_at:{seconds:1762560176 nanos:526538931}"
Nov 8 00:02:56.535102 containerd[1567]: time="2025-11-08T00:02:56.535022204Z" level=info msg="StartContainer for \"1a3e35cdb90dc208c307d1f8c84c48405e8a64fa529a4ad49e591180bd3def93\" returns successfully"
Nov 8 00:02:57.028867 kubelet[2702]: E1108 00:02:57.028829 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:57.344325 kubelet[2702]: E1108 00:02:57.343567 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:57.349471 containerd[1567]: time="2025-11-08T00:02:57.349034780Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 8 00:02:57.367749 containerd[1567]: time="2025-11-08T00:02:57.367704212Z" level=info msg="Container d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:02:57.378267 containerd[1567]: time="2025-11-08T00:02:57.378213572Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad\""
Nov 8 00:02:57.378855 containerd[1567]: time="2025-11-08T00:02:57.378814494Z" level=info msg="StartContainer for \"d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad\""
Nov 8 00:02:57.380842 containerd[1567]: time="2025-11-08T00:02:57.380815022Z" level=info msg="connecting to shim d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad" address="unix:///run/containerd/s/fafe49c2464df36765ce61eeb40fb4b214c0cff800b2e5fb316b06e0ede187ea" protocol=ttrpc version=3
Nov 8 00:02:57.400244 systemd[1]: Started cri-containerd-d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad.scope - libcontainer container d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad.
Nov 8 00:02:57.423483 systemd[1]: cri-containerd-d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad.scope: Deactivated successfully.
Nov 8 00:02:57.426636 containerd[1567]: time="2025-11-08T00:02:57.426580398Z" level=info msg="received container exit event container_id:\"d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad\" id:\"d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad\" pid:4700 exited_at:{seconds:1762560177 nanos:423775227}"
Nov 8 00:02:57.434675 containerd[1567]: time="2025-11-08T00:02:57.434575989Z" level=info msg="StartContainer for \"d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad\" returns successfully"
Nov 8 00:02:57.755022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d9145510671d0958c2a30a6ba0a035e8041c5a69c37d1380d15c1b73aa188fad-rootfs.mount: Deactivated successfully.
Nov 8 00:02:58.118304 kubelet[2702]: E1108 00:02:58.118137 2702 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 8 00:02:58.349766 kubelet[2702]: E1108 00:02:58.349711 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:58.353852 containerd[1567]: time="2025-11-08T00:02:58.353737500Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 8 00:02:58.373945 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4018777666.mount: Deactivated successfully.
Nov 8 00:02:58.375368 containerd[1567]: time="2025-11-08T00:02:58.375156700Z" level=info msg="Container a51bc8119a68d668cb04497ad87cf9014f2be7712cc85719fec169424d2f43a9: CDI devices from CRI Config.CDIDevices: []"
Nov 8 00:02:58.378039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1516820873.mount: Deactivated successfully.
Nov 8 00:02:58.395398 containerd[1567]: time="2025-11-08T00:02:58.395342776Z" level=info msg="CreateContainer within sandbox \"50639cb2bc3966cbf061d0411cc131061f54990931f05e77f6d0fb77e3d49293\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a51bc8119a68d668cb04497ad87cf9014f2be7712cc85719fec169424d2f43a9\""
Nov 8 00:02:58.399668 containerd[1567]: time="2025-11-08T00:02:58.399626552Z" level=info msg="StartContainer for \"a51bc8119a68d668cb04497ad87cf9014f2be7712cc85719fec169424d2f43a9\""
Nov 8 00:02:58.401827 containerd[1567]: time="2025-11-08T00:02:58.401769721Z" level=info msg="connecting to shim a51bc8119a68d668cb04497ad87cf9014f2be7712cc85719fec169424d2f43a9" address="unix:///run/containerd/s/fafe49c2464df36765ce61eeb40fb4b214c0cff800b2e5fb316b06e0ede187ea" protocol=ttrpc version=3
Nov 8 00:02:58.433273 systemd[1]: Started cri-containerd-a51bc8119a68d668cb04497ad87cf9014f2be7712cc85719fec169424d2f43a9.scope - libcontainer container a51bc8119a68d668cb04497ad87cf9014f2be7712cc85719fec169424d2f43a9.
Nov 8 00:02:58.490296 containerd[1567]: time="2025-11-08T00:02:58.490248054Z" level=info msg="StartContainer for \"a51bc8119a68d668cb04497ad87cf9014f2be7712cc85719fec169424d2f43a9\" returns successfully"
Nov 8 00:02:58.762093 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Nov 8 00:02:59.355921 kubelet[2702]: E1108 00:02:59.355835 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:02:59.392082 kubelet[2702]: I1108 00:02:59.391948 2702 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6qsds" podStartSLOduration=5.3919293790000005 podStartE2EDuration="5.391929379s" podCreationTimestamp="2025-11-08 00:02:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-08 00:02:59.391651858 +0000 UTC m=+81.431537507" watchObservedRunningTime="2025-11-08 00:02:59.391929379 +0000 UTC m=+81.431815068"
Nov 8 00:03:00.098074 kubelet[2702]: I1108 00:03:00.097968 2702 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-08T00:03:00Z","lastTransitionTime":"2025-11-08T00:03:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 8 00:03:00.850007 kubelet[2702]: E1108 00:03:00.849879 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:03:01.690115 systemd-networkd[1473]: lxc_health: Link UP
Nov 8 00:03:01.705366 systemd-networkd[1473]: lxc_health: Gained carrier
Nov 8 00:03:02.850354 kubelet[2702]: E1108 00:03:02.850310 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:03:03.365811 kubelet[2702]: E1108 00:03:03.365765 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:03:03.432083 kubelet[2702]: E1108 00:03:03.432004 2702 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48094->127.0.0.1:39995: write tcp 127.0.0.1:48094->127.0.0.1:39995: write: broken pipe
Nov 8 00:03:03.627307 systemd-networkd[1473]: lxc_health: Gained IPv6LL
Nov 8 00:03:04.367559 kubelet[2702]: E1108 00:03:04.367512 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:03:07.029036 kubelet[2702]: E1108 00:03:07.028990 2702 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 8 00:03:07.715163 sshd[4503]: Connection closed by 10.0.0.1 port 51342
Nov 8 00:03:07.715643 sshd-session[4496]: pam_unix(sshd:session): session closed for user core
Nov 8 00:03:07.719695 systemd[1]: sshd@25-10.0.0.84:22-10.0.0.1:51342.service: Deactivated successfully.
Nov 8 00:03:07.721590 systemd[1]: session-26.scope: Deactivated successfully.
Nov 8 00:03:07.722394 systemd-logind[1554]: Session 26 logged out. Waiting for processes to exit.
Nov 8 00:03:07.723819 systemd-logind[1554]: Removed session 26.