Nov 5 14:58:44.330220 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Nov 5 14:58:44.330244 kernel: Linux version 6.12.54-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Nov 5 13:42:06 -00 2025 Nov 5 14:58:44.330253 kernel: KASLR enabled Nov 5 14:58:44.330259 kernel: efi: EFI v2.7 by EDK II Nov 5 14:58:44.330273 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Nov 5 14:58:44.330279 kernel: random: crng init done Nov 5 14:58:44.330286 kernel: secureboot: Secure boot disabled Nov 5 14:58:44.330292 kernel: ACPI: Early table checksum verification disabled Nov 5 14:58:44.330300 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Nov 5 14:58:44.330306 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Nov 5 14:58:44.330313 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330319 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330325 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330331 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330340 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330346 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330353 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330359 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330366 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Nov 5 14:58:44.330372 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Nov 5 14:58:44.330378 kernel: ACPI: Use ACPI SPCR as default console: No Nov 5 14:58:44.330385 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Nov 5 14:58:44.330393 kernel: NODE_DATA(0) allocated [mem 0xdc964a00-0xdc96bfff] Nov 5 14:58:44.330399 kernel: Zone ranges: Nov 5 14:58:44.330405 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Nov 5 14:58:44.330412 kernel: DMA32 empty Nov 5 14:58:44.330418 kernel: Normal empty Nov 5 14:58:44.330424 kernel: Device empty Nov 5 14:58:44.330430 kernel: Movable zone start for each node Nov 5 14:58:44.330437 kernel: Early memory node ranges Nov 5 14:58:44.330443 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Nov 5 14:58:44.330449 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Nov 5 14:58:44.330456 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Nov 5 14:58:44.330462 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Nov 5 14:58:44.330470 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Nov 5 14:58:44.330476 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Nov 5 14:58:44.330483 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Nov 5 14:58:44.330489 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Nov 5 14:58:44.330495 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Nov 5 14:58:44.330502 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Nov 5 14:58:44.330512 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Nov 5 14:58:44.330518 kernel: node 0: [mem 
0x00000000dcec0000-0x00000000dcfdffff] Nov 5 14:58:44.330525 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Nov 5 14:58:44.330532 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Nov 5 14:58:44.330539 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Nov 5 14:58:44.330546 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Nov 5 14:58:44.330553 kernel: psci: probing for conduit method from ACPI. Nov 5 14:58:44.330559 kernel: psci: PSCIv1.1 detected in firmware. Nov 5 14:58:44.330567 kernel: psci: Using standard PSCI v0.2 function IDs Nov 5 14:58:44.330574 kernel: psci: Trusted OS migration not required Nov 5 14:58:44.330581 kernel: psci: SMC Calling Convention v1.1 Nov 5 14:58:44.330588 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Nov 5 14:58:44.330595 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Nov 5 14:58:44.330602 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Nov 5 14:58:44.330609 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Nov 5 14:58:44.330616 kernel: Detected PIPT I-cache on CPU0 Nov 5 14:58:44.330623 kernel: CPU features: detected: GIC system register CPU interface Nov 5 14:58:44.330630 kernel: CPU features: detected: Spectre-v4 Nov 5 14:58:44.330637 kernel: CPU features: detected: Spectre-BHB Nov 5 14:58:44.330645 kernel: CPU features: kernel page table isolation forced ON by KASLR Nov 5 14:58:44.330652 kernel: CPU features: detected: Kernel page table isolation (KPTI) Nov 5 14:58:44.330659 kernel: CPU features: detected: ARM erratum 1418040 Nov 5 14:58:44.330666 kernel: CPU features: detected: SSBS not fully self-synchronizing Nov 5 14:58:44.330673 kernel: alternatives: applying boot alternatives Nov 5 14:58:44.330681 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b Nov 5 14:58:44.330697 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 5 14:58:44.330704 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Nov 5 14:58:44.330711 kernel: Fallback order for Node 0: 0 Nov 5 14:58:44.330718 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Nov 5 14:58:44.330727 kernel: Policy zone: DMA Nov 5 14:58:44.330734 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 5 14:58:44.330740 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Nov 5 14:58:44.330747 kernel: software IO TLB: area num 4. Nov 5 14:58:44.330754 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Nov 5 14:58:44.330761 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Nov 5 14:58:44.330768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Nov 5 14:58:44.330775 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 5 14:58:44.330782 kernel: rcu: RCU event tracing is enabled. Nov 5 14:58:44.330789 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Nov 5 14:58:44.330796 kernel: Trampoline variant of Tasks RCU enabled. Nov 5 14:58:44.330805 kernel: Tracing variant of Tasks RCU enabled. Nov 5 14:58:44.330812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Nov 5 14:58:44.330818 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Nov 5 14:58:44.330825 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 14:58:44.330832 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Nov 5 14:58:44.330839 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Nov 5 14:58:44.330846 kernel: GICv3: 256 SPIs implemented Nov 5 14:58:44.330853 kernel: GICv3: 0 Extended SPIs implemented Nov 5 14:58:44.330859 kernel: Root IRQ handler: gic_handle_irq Nov 5 14:58:44.330866 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Nov 5 14:58:44.330873 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Nov 5 14:58:44.330881 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Nov 5 14:58:44.330888 kernel: ITS [mem 0x08080000-0x0809ffff] Nov 5 14:58:44.330895 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Nov 5 14:58:44.330902 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Nov 5 14:58:44.330908 kernel: GICv3: using LPI property table @0x0000000040130000 Nov 5 14:58:44.330915 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Nov 5 14:58:44.330922 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Nov 5 14:58:44.330929 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:44.330936 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Nov 5 14:58:44.330943 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Nov 5 14:58:44.330950 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Nov 5 14:58:44.330958 kernel: arm-pv: using stolen time PV Nov 5 14:58:44.330965 kernel: Console: colour dummy device 80x25 Nov 5 14:58:44.330973 kernel: ACPI: Core revision 20240827 Nov 5 14:58:44.330980 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Nov 5 14:58:44.330987 kernel: pid_max: default: 32768 minimum: 301 Nov 5 14:58:44.330994 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Nov 5 14:58:44.331001 kernel: landlock: Up and running. Nov 5 14:58:44.331009 kernel: SELinux: Initializing. Nov 5 14:58:44.331017 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 14:58:44.331025 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Nov 5 14:58:44.331032 kernel: rcu: Hierarchical SRCU implementation. Nov 5 14:58:44.331039 kernel: rcu: Max phase no-delay instances is 400. Nov 5 14:58:44.331047 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Nov 5 14:58:44.331054 kernel: Remapping and enabling EFI services. Nov 5 14:58:44.331061 kernel: smp: Bringing up secondary CPUs ... 
Nov 5 14:58:44.331070 kernel: Detected PIPT I-cache on CPU1 Nov 5 14:58:44.331081 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Nov 5 14:58:44.331090 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Nov 5 14:58:44.331098 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:44.331105 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Nov 5 14:58:44.331113 kernel: Detected PIPT I-cache on CPU2 Nov 5 14:58:44.331120 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Nov 5 14:58:44.331129 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Nov 5 14:58:44.331137 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:44.331144 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Nov 5 14:58:44.331151 kernel: Detected PIPT I-cache on CPU3 Nov 5 14:58:44.331159 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Nov 5 14:58:44.331167 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Nov 5 14:58:44.331175 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Nov 5 14:58:44.331183 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Nov 5 14:58:44.331191 kernel: smp: Brought up 1 node, 4 CPUs Nov 5 14:58:44.331198 kernel: SMP: Total of 4 processors activated. Nov 5 14:58:44.331205 kernel: CPU: All CPU(s) started at EL1 Nov 5 14:58:44.331213 kernel: CPU features: detected: 32-bit EL0 Support Nov 5 14:58:44.331221 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Nov 5 14:58:44.331228 kernel: CPU features: detected: Common not Private translations Nov 5 14:58:44.331248 kernel: CPU features: detected: CRC32 instructions Nov 5 14:58:44.331256 kernel: CPU features: detected: Enhanced Virtualization Traps Nov 5 14:58:44.331268 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Nov 5 14:58:44.331276 kernel: CPU features: detected: LSE atomic instructions Nov 5 14:58:44.331283 kernel: CPU features: detected: Privileged Access Never Nov 5 14:58:44.331290 kernel: CPU features: detected: RAS Extension Support Nov 5 14:58:44.331298 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Nov 5 14:58:44.331305 kernel: alternatives: applying system-wide alternatives Nov 5 14:58:44.331314 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Nov 5 14:58:44.331322 kernel: Memory: 2450396K/2572288K available (11136K kernel code, 2456K rwdata, 9084K rodata, 12992K init, 1038K bss, 99556K reserved, 16384K cma-reserved) Nov 5 14:58:44.331330 kernel: devtmpfs: initialized Nov 5 14:58:44.331337 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 5 14:58:44.331345 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Nov 5 14:58:44.331353 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Nov 5 14:58:44.331361 kernel: 0 pages in range for non-PLT usage Nov 5 14:58:44.331369 kernel: 515056 pages in range for PLT usage Nov 5 14:58:44.331377 kernel: pinctrl core: initialized pinctrl subsystem Nov 5 14:58:44.331384 kernel: SMBIOS 3.0.0 present. 
Nov 5 14:58:44.331391 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Nov 5 14:58:44.331399 kernel: DMI: Memory slots populated: 1/1 Nov 5 14:58:44.331406 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 5 14:58:44.331414 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Nov 5 14:58:44.331422 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Nov 5 14:58:44.331430 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Nov 5 14:58:44.331438 kernel: audit: initializing netlink subsys (disabled) Nov 5 14:58:44.331446 kernel: audit: type=2000 audit(0.016:1): state=initialized audit_enabled=0 res=1 Nov 5 14:58:44.331453 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 5 14:58:44.331461 kernel: cpuidle: using governor menu Nov 5 14:58:44.331469 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Nov 5 14:58:44.331478 kernel: ASID allocator initialised with 32768 entries Nov 5 14:58:44.331486 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 5 14:58:44.331493 kernel: Serial: AMBA PL011 UART driver Nov 5 14:58:44.331517 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 5 14:58:44.331524 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Nov 5 14:58:44.331532 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Nov 5 14:58:44.331540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Nov 5 14:58:44.331548 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 5 14:58:44.331558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Nov 5 14:58:44.331566 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Nov 5 14:58:44.331574 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Nov 5 14:58:44.331581 kernel: ACPI: Added _OSI(Module Device) Nov 5 14:58:44.331589 kernel: ACPI: Added _OSI(Processor Device) Nov 5 14:58:44.331597 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 5 14:58:44.331604 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 5 14:58:44.331613 kernel: ACPI: Interpreter enabled Nov 5 14:58:44.331621 kernel: ACPI: Using GIC for interrupt routing Nov 5 14:58:44.331629 kernel: ACPI: MCFG table detected, 1 entries Nov 5 14:58:44.331637 kernel: ACPI: CPU0 has been hot-added Nov 5 14:58:44.331645 kernel: ACPI: CPU1 has been hot-added Nov 5 14:58:44.331652 kernel: ACPI: CPU2 has been hot-added Nov 5 14:58:44.331660 kernel: ACPI: CPU3 has been hot-added Nov 5 14:58:44.331669 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Nov 5 14:58:44.331676 kernel: printk: legacy console [ttyAMA0] enabled Nov 5 14:58:44.331694 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Nov 5 14:58:44.331864 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Nov 5 14:58:44.331964 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Nov 5 14:58:44.332056 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Nov 5 14:58:44.332178 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Nov 5 14:58:44.332267 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Nov 5 14:58:44.332278 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Nov 5 14:58:44.332286 kernel: PCI host bridge to bus 0000:00 Nov 5 
14:58:44.332384 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Nov 5 14:58:44.332462 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Nov 5 14:58:44.332539 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Nov 5 14:58:44.332612 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Nov 5 14:58:44.332724 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Nov 5 14:58:44.332821 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Nov 5 14:58:44.332910 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Nov 5 14:58:44.332991 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Nov 5 14:58:44.333074 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Nov 5 14:58:44.333154 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Nov 5 14:58:44.333235 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Nov 5 14:58:44.333324 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Nov 5 14:58:44.333401 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Nov 5 14:58:44.333473 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Nov 5 14:58:44.333548 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Nov 5 14:58:44.333558 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Nov 5 14:58:44.333566 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Nov 5 14:58:44.333574 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Nov 5 14:58:44.333581 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Nov 5 14:58:44.333589 kernel: iommu: Default domain type: Translated Nov 5 14:58:44.333598 kernel: iommu: DMA domain TLB invalidation policy: strict mode Nov 5 14:58:44.333606 kernel: efivars: Registered efivars operations Nov 5 14:58:44.333613 kernel: vgaarb: loaded Nov 5 14:58:44.333621 kernel: clocksource: Switched to clocksource arch_sys_counter Nov 5 14:58:44.333628 kernel: VFS: Disk quotas dquot_6.6.0 Nov 5 14:58:44.333636 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 5 14:58:44.333644 kernel: pnp: PnP ACPI init Nov 5 14:58:44.333745 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Nov 5 14:58:44.333756 kernel: pnp: PnP ACPI: found 1 devices Nov 5 14:58:44.333764 kernel: NET: Registered PF_INET protocol family Nov 5 14:58:44.333771 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 5 14:58:44.333779 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Nov 5 14:58:44.333787 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 5 14:58:44.333795 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Nov 5 14:58:44.333804 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Nov 5 14:58:44.333812 kernel: TCP: Hash tables configured (established 32768 bind 32768) Nov 5 14:58:44.333820 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 14:58:44.333827 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Nov 5 14:58:44.333835 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 5 14:58:44.333843 kernel: PCI: CLS 0 bytes, default 64 Nov 5 14:58:44.333851 kernel: kvm [1]: HYP mode not available Nov 5 14:58:44.333860 kernel: Initialise system 
trusted keyrings Nov 5 14:58:44.333868 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Nov 5 14:58:44.333876 kernel: Key type asymmetric registered Nov 5 14:58:44.333883 kernel: Asymmetric key parser 'x509' registered Nov 5 14:58:44.333891 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Nov 5 14:58:44.333899 kernel: io scheduler mq-deadline registered Nov 5 14:58:44.333907 kernel: io scheduler kyber registered Nov 5 14:58:44.333915 kernel: io scheduler bfq registered Nov 5 14:58:44.333931 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Nov 5 14:58:44.333939 kernel: ACPI: button: Power Button [PWRB] Nov 5 14:58:44.333947 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Nov 5 14:58:44.334031 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Nov 5 14:58:44.334041 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 5 14:58:44.334049 kernel: thunder_xcv, ver 1.0 Nov 5 14:58:44.334058 kernel: thunder_bgx, ver 1.0 Nov 5 14:58:44.334066 kernel: nicpf, ver 1.0 Nov 5 14:58:44.334073 kernel: nicvf, ver 1.0 Nov 5 14:58:44.334186 kernel: rtc-efi rtc-efi.0: registered as rtc0 Nov 5 14:58:44.334277 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-05T14:58:43 UTC (1762354723) Nov 5 14:58:44.334288 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 5 14:58:44.334298 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Nov 5 14:58:44.334306 kernel: watchdog: NMI not fully supported Nov 5 14:58:44.334313 kernel: watchdog: Hard watchdog permanently disabled Nov 5 14:58:44.334321 kernel: NET: Registered PF_INET6 protocol family Nov 5 14:58:44.334328 kernel: Segment Routing with IPv6 Nov 5 14:58:44.334336 kernel: In-situ OAM (IOAM) with IPv6 Nov 5 14:58:44.334344 kernel: NET: Registered PF_PACKET protocol family Nov 5 14:58:44.334351 kernel: Key type dns_resolver registered Nov 5 14:58:44.334360 kernel: registered taskstats version 1 Nov 5 14:58:44.334367 kernel: Loading compiled-in X.509 certificates Nov 5 14:58:44.334375 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.54-flatcar: 4b3babb46eb583bd8b0310732885d24e60ea58c5' Nov 5 14:58:44.334383 kernel: Demotion targets for Node 0: null Nov 5 14:58:44.334390 kernel: Key type .fscrypt registered Nov 5 14:58:44.334398 kernel: Key type fscrypt-provisioning registered Nov 5 14:58:44.334406 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 5 14:58:44.334415 kernel: ima: Allocated hash algorithm: sha1 Nov 5 14:58:44.334422 kernel: ima: No architecture policies found Nov 5 14:58:44.334430 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Nov 5 14:58:44.334438 kernel: clk: Disabling unused clocks Nov 5 14:58:44.334445 kernel: PM: genpd: Disabling unused power domains Nov 5 14:58:44.334453 kernel: Freeing unused kernel memory: 12992K Nov 5 14:58:44.334460 kernel: Run /init as init process Nov 5 14:58:44.334469 kernel: with arguments: Nov 5 14:58:44.334476 kernel: /init Nov 5 14:58:44.334484 kernel: with environment: Nov 5 14:58:44.334491 kernel: HOME=/ Nov 5 14:58:44.334499 kernel: TERM=linux Nov 5 14:58:44.334593 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Nov 5 14:58:44.334672 kernel: virtio_blk virtio1: [vda] 27000832 512-byte logical blocks (13.8 GB/12.9 GiB) Nov 5 14:58:44.334691 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
Nov 5 14:58:44.334699 kernel: GPT:16515071 != 27000831 Nov 5 14:58:44.334718 kernel: GPT:Alternate GPT header not at the end of the disk. Nov 5 14:58:44.334725 kernel: GPT:16515071 != 27000831 Nov 5 14:58:44.334733 kernel: GPT: Use GNU Parted to correct GPT errors. Nov 5 14:58:44.334741 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Nov 5 14:58:44.334751 kernel: SCSI subsystem initialized Nov 5 14:58:44.334759 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 5 14:58:44.334766 kernel: device-mapper: uevent: version 1.0.3 Nov 5 14:58:44.334774 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Nov 5 14:58:44.334782 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Nov 5 14:58:44.334789 kernel: raid6: neonx8 gen() 15776 MB/s Nov 5 14:58:44.334797 kernel: raid6: neonx4 gen() 15758 MB/s Nov 5 14:58:44.334805 kernel: raid6: neonx2 gen() 13218 MB/s Nov 5 14:58:44.334814 kernel: raid6: neonx1 gen() 10375 MB/s Nov 5 14:58:44.334822 kernel: raid6: int64x8 gen() 6878 MB/s Nov 5 14:58:44.334829 kernel: raid6: int64x4 gen() 7293 MB/s Nov 5 14:58:44.334837 kernel: raid6: int64x2 gen() 6013 MB/s Nov 5 14:58:44.334844 kernel: raid6: int64x1 gen() 5037 MB/s Nov 5 14:58:44.334852 kernel: raid6: using algorithm neonx8 gen() 15776 MB/s Nov 5 14:58:44.334861 kernel: raid6: .... xor() 11923 MB/s, rmw enabled Nov 5 14:58:44.334868 kernel: raid6: using neon recovery algorithm Nov 5 14:58:44.334876 kernel: xor: measuring software checksum speed Nov 5 14:58:44.334884 kernel: 8regs : 21579 MB/sec Nov 5 14:58:44.334892 kernel: 32regs : 21676 MB/sec Nov 5 14:58:44.334900 kernel: arm64_neon : 26682 MB/sec Nov 5 14:58:44.334907 kernel: xor: using function: arm64_neon (26682 MB/sec) Nov 5 14:58:44.334915 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 5 14:58:44.334925 kernel: BTRFS: device fsid d8f84a83-fd8b-4c0e-831a-0d7c5ff234be devid 1 transid 36 /dev/mapper/usr (253:0) scanned by mount (204) Nov 5 14:58:44.334933 kernel: BTRFS info (device dm-0): first mount of filesystem d8f84a83-fd8b-4c0e-831a-0d7c5ff234be Nov 5 14:58:44.334941 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Nov 5 14:58:44.334949 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 5 14:58:44.334957 kernel: BTRFS info (device dm-0): enabling free space tree Nov 5 14:58:44.334965 kernel: loop: module loaded Nov 5 14:58:44.334972 kernel: loop0: detected capacity change from 0 to 91464 Nov 5 14:58:44.334982 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 5 14:58:44.334991 systemd[1]: Successfully made /usr/ read-only. Nov 5 14:58:44.335002 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 14:58:44.335011 systemd[1]: Detected virtualization kvm. Nov 5 14:58:44.335019 systemd[1]: Detected architecture arm64. Nov 5 14:58:44.335028 systemd[1]: Running in initrd. Nov 5 14:58:44.335036 systemd[1]: No hostname configured, using default hostname. Nov 5 14:58:44.335045 systemd[1]: Hostname set to . Nov 5 14:58:44.335053 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. 
Nov 5 14:58:44.335061 systemd[1]: Queued start job for default target initrd.target. Nov 5 14:58:44.335069 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 14:58:44.335077 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 14:58:44.335088 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 14:58:44.335097 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 5 14:58:44.335105 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 14:58:44.335114 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 5 14:58:44.335122 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 5 14:58:44.335147 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 14:58:44.335156 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 14:58:44.335165 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Nov 5 14:58:44.335173 systemd[1]: Reached target paths.target - Path Units. Nov 5 14:58:44.335181 systemd[1]: Reached target slices.target - Slice Units. Nov 5 14:58:44.335189 systemd[1]: Reached target swap.target - Swaps. Nov 5 14:58:44.335198 systemd[1]: Reached target timers.target - Timer Units. Nov 5 14:58:44.335207 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 14:58:44.335216 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 14:58:44.335224 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 5 14:58:44.335232 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Nov 5 14:58:44.335247 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 14:58:44.335259 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 14:58:44.335276 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 14:58:44.335285 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 14:58:44.335293 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 5 14:58:44.335304 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 5 14:58:44.335312 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 14:58:44.335321 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 5 14:58:44.335331 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Nov 5 14:58:44.335340 systemd[1]: Starting systemd-fsck-usr.service... Nov 5 14:58:44.335349 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 14:58:44.335357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 14:58:44.335365 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 14:58:44.335376 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 5 14:58:44.335384 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Nov 5 14:58:44.335393 systemd[1]: Finished systemd-fsck-usr.service. Nov 5 14:58:44.335401 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 5 14:58:44.335429 systemd-journald[347]: Collecting audit messages is disabled. Nov 5 14:58:44.335451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 5 14:58:44.335460 systemd-journald[347]: Journal started Nov 5 14:58:44.335480 systemd-journald[347]: Runtime Journal (/run/log/journal/551a6bac5ddc48bb834acdf9fa8f36b6) is 6M, max 48.5M, 42.4M free. Nov 5 14:58:44.338244 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 14:58:44.338284 kernel: Bridge firewalling registered Nov 5 14:58:44.338322 systemd-modules-load[348]: Inserted module 'br_netfilter' Nov 5 14:58:44.339970 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 14:58:44.342596 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 5 14:58:44.346414 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 14:58:44.348131 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 14:58:44.351848 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 14:58:44.355238 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 14:58:44.358405 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 5 14:58:44.363670 systemd-tmpfiles[368]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Nov 5 14:58:44.367227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 14:58:44.370128 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 14:58:44.372150 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 14:58:44.375653 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 14:58:44.390854 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 14:58:44.393818 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 5 14:58:44.425937 systemd-resolved[384]: Positive Trust Anchors: Nov 5 14:58:44.425953 systemd-resolved[384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 14:58:44.425956 systemd-resolved[384]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 14:58:44.425988 systemd-resolved[384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 14:58:44.438707 dracut-cmdline[389]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=15758474ef4cace68fb389c1b75e821ab8f30d9b752a28429e0459793723ea7b Nov 5 14:58:44.448349 systemd-resolved[384]: Defaulting to hostname 'linux'. Nov 5 14:58:44.449459 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 14:58:44.450719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 14:58:44.509708 kernel: Loading iSCSI transport class v2.0-870. Nov 5 14:58:44.515704 kernel: iscsi: registered transport (tcp) Nov 5 14:58:44.528926 kernel: iscsi: registered transport (qla4xxx) Nov 5 14:58:44.528979 kernel: QLogic iSCSI HBA Driver Nov 5 14:58:44.548915 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 14:58:44.574796 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 14:58:44.577118 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 14:58:44.619859 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 5 14:58:44.622961 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 5 14:58:44.624460 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 5 14:58:44.659747 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 5 14:58:44.662568 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 14:58:44.689800 systemd-udevd[628]: Using default interface naming scheme 'v257'. Nov 5 14:58:44.697650 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 14:58:44.701032 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Nov 5 14:58:44.723779 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 14:58:44.728171 dracut-pre-trigger[696]: rd.md=0: removing MD RAID activation Nov 5 14:58:44.728880 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 14:58:44.750752 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 5 14:58:44.754004 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 14:58:44.771355 systemd-networkd[738]: lo: Link UP Nov 5 14:58:44.771363 systemd-networkd[738]: lo: Gained carrier Nov 5 14:58:44.771801 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Nov 5 14:58:44.773333 systemd[1]: Reached target network.target - Network. Nov 5 14:58:44.805303 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 14:58:44.808186 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 5 14:58:44.859594 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Nov 5 14:58:44.868023 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Nov 5 14:58:44.874769 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Nov 5 14:58:44.884200 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 14:58:44.887173 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 5 14:58:44.893828 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 14:58:44.893948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 14:58:44.897540 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 14:58:44.901629 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 14:58:44.903978 systemd-networkd[738]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 14:58:44.903982 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 14:58:44.911077 disk-uuid[802]: Primary Header is updated. Nov 5 14:58:44.911077 disk-uuid[802]: Secondary Entries is updated. Nov 5 14:58:44.911077 disk-uuid[802]: Secondary Header is updated. Nov 5 14:58:44.905653 systemd-networkd[738]: eth0: Link UP Nov 5 14:58:44.905829 systemd-networkd[738]: eth0: Gained carrier Nov 5 14:58:44.905839 systemd-networkd[738]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 14:58:44.918607 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 5 14:58:44.920738 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 14:58:44.921416 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 14:58:44.925101 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 14:58:44.929067 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 14:58:44.933904 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 5 14:58:44.948886 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 14:58:44.963561 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 5 14:58:45.944732 disk-uuid[803]: Warning: The kernel is still using the old partition table. Nov 5 14:58:45.944732 disk-uuid[803]: The new table will be used at the next reboot or after you Nov 5 14:58:45.944732 disk-uuid[803]: run partprobe(8) or kpartx(8) Nov 5 14:58:45.944732 disk-uuid[803]: The operation has completed successfully. Nov 5 14:58:45.953740 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 5 14:58:45.954803 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 5 14:58:45.957054 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Nov 5 14:58:45.987719 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (832) Nov 5 14:58:45.990598 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 14:58:45.990622 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 5 14:58:45.993174 kernel: BTRFS info (device vda6): turning on async discard Nov 5 14:58:45.993202 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 14:58:45.998698 kernel: BTRFS info (device vda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 14:58:45.999192 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 5 14:58:46.001747 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 5 14:58:46.110810 ignition[851]: Ignition 2.22.0 Nov 5 14:58:46.110822 ignition[851]: Stage: fetch-offline Nov 5 14:58:46.110857 ignition[851]: no configs at "/usr/lib/ignition/base.d" Nov 5 14:58:46.110866 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 14:58:46.110948 ignition[851]: parsed url from cmdline: "" Nov 5 14:58:46.110952 ignition[851]: no config URL provided Nov 5 14:58:46.110957 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" Nov 5 14:58:46.110965 ignition[851]: no config at "/usr/lib/ignition/user.ign" Nov 5 14:58:46.111002 ignition[851]: op(1): [started] loading QEMU firmware config module Nov 5 14:58:46.111007 ignition[851]: op(1): executing: "modprobe" "qemu_fw_cfg" Nov 5 14:58:46.120853 ignition[851]: op(1): [finished] loading QEMU firmware config module Nov 5 14:58:46.120878 ignition[851]: QEMU firmware config was not found. Ignoring... Nov 5 14:58:46.163704 ignition[851]: parsing config with SHA512: a8c335f3013ab6d0d0e6ab2310cf2c83c0938972674ae0d567e611fbe698b26472833c3b1dea1fb2c2b9e73796c4d7a159e9dc7717fd9307a51b12894632e4d5 Nov 5 14:58:46.169985 unknown[851]: fetched base config from "system" Nov 5 14:58:46.170000 unknown[851]: fetched user config from "qemu" Nov 5 14:58:46.170474 ignition[851]: fetch-offline: fetch-offline passed Nov 5 14:58:46.172457 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 14:58:46.170534 ignition[851]: Ignition finished successfully Nov 5 14:58:46.173984 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Nov 5 14:58:46.174763 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 5 14:58:46.203107 ignition[867]: Ignition 2.22.0 Nov 5 14:58:46.203122 ignition[867]: Stage: kargs Nov 5 14:58:46.203269 ignition[867]: no configs at "/usr/lib/ignition/base.d" Nov 5 14:58:46.203277 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 14:58:46.204111 ignition[867]: kargs: kargs passed Nov 5 14:58:46.204155 ignition[867]: Ignition finished successfully Nov 5 14:58:46.207781 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 5 14:58:46.210174 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Nov 5 14:58:46.240996 ignition[875]: Ignition 2.22.0 Nov 5 14:58:46.241014 ignition[875]: Stage: disks Nov 5 14:58:46.241146 ignition[875]: no configs at "/usr/lib/ignition/base.d" Nov 5 14:58:46.241154 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 14:58:46.241908 ignition[875]: disks: disks passed Nov 5 14:58:46.244090 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 5 14:58:46.241951 ignition[875]: Ignition finished successfully Nov 5 14:58:46.245600 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 5 14:58:46.247054 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 5 14:58:46.249087 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 14:58:46.250755 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 14:58:46.252736 systemd[1]: Reached target basic.target - Basic System. Nov 5 14:58:46.255678 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 5 14:58:46.287932 systemd-fsck[885]: ROOT: clean, 15/456736 files, 38230/456704 blocks Nov 5 14:58:46.291935 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 5 14:58:46.296166 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 5 14:58:46.363709 kernel: EXT4-fs (vda9): mounted filesystem 67ab558f-e1dc-496b-b18a-e9709809a3c4 r/w with ordered data mode. Quota mode: none. Nov 5 14:58:46.364488 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 5 14:58:46.365817 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 5 14:58:46.368388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 14:58:46.370011 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 5 14:58:46.371027 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Nov 5 14:58:46.371061 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 5 14:58:46.371086 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 14:58:46.390340 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Nov 5 14:58:46.394704 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (893) Nov 5 14:58:46.394720 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Nov 5 14:58:46.398875 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 14:58:46.398896 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 5 14:58:46.402436 kernel: BTRFS info (device vda6): turning on async discard Nov 5 14:58:46.402472 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 14:58:46.402424 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 5 14:58:46.432179 initrd-setup-root[917]: cut: /sysroot/etc/passwd: No such file or directory Nov 5 14:58:46.436592 initrd-setup-root[924]: cut: /sysroot/etc/group: No such file or directory Nov 5 14:58:46.440617 initrd-setup-root[931]: cut: /sysroot/etc/shadow: No such file or directory Nov 5 14:58:46.444199 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Nov 5 14:58:46.512091 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Nov 5 14:58:46.514745 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Nov 5 14:58:46.516569 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Nov 5 14:58:46.536970 systemd[1]: sysroot-oem.mount: Deactivated successfully. Nov 5 14:58:46.539735 kernel: BTRFS info (device vda6): last unmount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 14:58:46.553830 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Nov 5 14:58:46.569489 ignition[1007]: INFO : Ignition 2.22.0 Nov 5 14:58:46.569489 ignition[1007]: INFO : Stage: mount Nov 5 14:58:46.571275 ignition[1007]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 14:58:46.571275 ignition[1007]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 14:58:46.571275 ignition[1007]: INFO : mount: mount passed Nov 5 14:58:46.571275 ignition[1007]: INFO : Ignition finished successfully Nov 5 14:58:46.574982 systemd[1]: Finished ignition-mount.service - Ignition (mount). Nov 5 14:58:46.577527 systemd[1]: Starting ignition-files.service - Ignition (files)... Nov 5 14:58:46.667876 systemd-networkd[738]: eth0: Gained IPv6LL Nov 5 14:58:47.366105 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 5 14:58:47.395696 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1019) Nov 5 14:58:47.398032 kernel: BTRFS info (device vda6): first mount of filesystem 53018052-4eb1-4655-a725-a5d3199d5804 Nov 5 14:58:47.398064 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Nov 5 14:58:47.401259 kernel: BTRFS info (device vda6): turning on async discard Nov 5 14:58:47.401313 kernel: BTRFS info (device vda6): enabling free space tree Nov 5 14:58:47.402620 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Nov 5 14:58:47.429771 ignition[1036]: INFO : Ignition 2.22.0 Nov 5 14:58:47.429771 ignition[1036]: INFO : Stage: files Nov 5 14:58:47.431582 ignition[1036]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 14:58:47.431582 ignition[1036]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 14:58:47.431582 ignition[1036]: DEBUG : files: compiled without relabeling support, skipping Nov 5 14:58:47.435350 ignition[1036]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Nov 5 14:58:47.435350 ignition[1036]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Nov 5 14:58:47.438801 ignition[1036]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Nov 5 14:58:47.440352 ignition[1036]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Nov 5 14:58:47.440352 ignition[1036]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Nov 5 14:58:47.439448 unknown[1036]: wrote ssh authorized keys file for user: core Nov 5 14:58:47.445104 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 5 14:58:47.445104 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Nov 5 14:58:47.490515 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Nov 5 14:58:47.694811 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Nov 5 14:58:47.694811 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 14:58:47.699097 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Nov 5 14:58:48.559067 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Nov 5 14:58:48.957982 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Nov 5 14:58:48.957982 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Nov 5 14:58:48.961812 ignition[1036]: 
INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Nov 5 14:58:48.961812 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 14:58:48.978881 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 14:58:48.978881 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 14:58:48.978881 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Nov 5 14:58:49.342138 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Nov 5 14:58:49.593339 ignition[1036]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Nov 5 14:58:49.593339 ignition[1036]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Nov 5 14:58:49.597767 ignition[1036]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Nov 5 14:58:49.612201 ignition[1036]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 14:58:49.615310 ignition[1036]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Nov 5 14:58:49.618109 ignition[1036]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Nov 5 14:58:49.618109 ignition[1036]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Nov 5 14:58:49.618109 ignition[1036]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Nov 5 14:58:49.618109 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Nov 5 14:58:49.618109 ignition[1036]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file 
"/sysroot/etc/.ignition-result.json" Nov 5 14:58:49.618109 ignition[1036]: INFO : files: files passed Nov 5 14:58:49.618109 ignition[1036]: INFO : Ignition finished successfully Nov 5 14:58:49.622743 systemd[1]: Finished ignition-files.service - Ignition (files). Nov 5 14:58:49.625550 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Nov 5 14:58:49.628080 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Nov 5 14:58:49.637310 systemd[1]: ignition-quench.service: Deactivated successfully. Nov 5 14:58:49.637460 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Nov 5 14:58:49.641020 initrd-setup-root-after-ignition[1067]: grep: /sysroot/oem/oem-release: No such file or directory Nov 5 14:58:49.643593 initrd-setup-root-after-ignition[1069]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 14:58:49.643593 initrd-setup-root-after-ignition[1069]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Nov 5 14:58:49.646950 initrd-setup-root-after-ignition[1073]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Nov 5 14:58:49.646519 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 14:58:49.649282 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Nov 5 14:58:49.652230 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Nov 5 14:58:49.695962 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Nov 5 14:58:49.696758 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Nov 5 14:58:49.698460 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Nov 5 14:58:49.700461 systemd[1]: Reached target initrd.target - Initrd Default Target. Nov 5 14:58:49.702772 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Nov 5 14:58:49.703732 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Nov 5 14:58:49.733878 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 14:58:49.736624 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Nov 5 14:58:49.756167 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Nov 5 14:58:49.756388 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Nov 5 14:58:49.758743 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 14:58:49.760993 systemd[1]: Stopped target timers.target - Timer Units. Nov 5 14:58:49.762818 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 5 14:58:49.762960 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 5 14:58:49.765675 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 5 14:58:49.767839 systemd[1]: Stopped target basic.target - Basic System. Nov 5 14:58:49.769704 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 5 14:58:49.771640 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 5 14:58:49.773883 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 5 14:58:49.775991 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
Nov 5 14:58:49.778095 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 5 14:58:49.780035 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 5 14:58:49.782126 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 5 14:58:49.784271 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 5 14:58:49.786105 systemd[1]: Stopped target swap.target - Swaps. Nov 5 14:58:49.787704 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 5 14:58:49.787846 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 5 14:58:49.790388 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Nov 5 14:58:49.792626 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 14:58:49.794855 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 5 14:58:49.795871 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 14:58:49.797216 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 5 14:58:49.797360 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 5 14:58:49.800448 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 5 14:58:49.800597 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 5 14:58:49.802759 systemd[1]: Stopped target paths.target - Path Units. Nov 5 14:58:49.804490 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 5 14:58:49.805390 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 14:58:49.806855 systemd[1]: Stopped target slices.target - Slice Units. Nov 5 14:58:49.808773 systemd[1]: Stopped target sockets.target - Socket Units. Nov 5 14:58:49.810570 systemd[1]: iscsid.socket: Deactivated successfully. Nov 5 14:58:49.810664 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 5 14:58:49.812830 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 5 14:58:49.812924 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 5 14:58:49.815438 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 5 14:58:49.815566 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 5 14:58:49.817565 systemd[1]: ignition-files.service: Deactivated successfully. Nov 5 14:58:49.817701 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 5 14:58:49.820231 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 5 14:58:49.821821 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 5 14:58:49.821969 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 14:58:49.824776 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 5 14:58:49.826724 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Nov 5 14:58:49.826874 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 14:58:49.829457 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 5 14:58:49.829573 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 14:58:49.831485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 5 14:58:49.831598 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. 
Nov 5 14:58:49.841039 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 5 14:58:49.841131 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 5 14:58:49.848455 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 5 14:58:49.855278 ignition[1093]: INFO : Ignition 2.22.0 Nov 5 14:58:49.855278 ignition[1093]: INFO : Stage: umount Nov 5 14:58:49.857277 ignition[1093]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 5 14:58:49.857277 ignition[1093]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Nov 5 14:58:49.857277 ignition[1093]: INFO : umount: umount passed Nov 5 14:58:49.857277 ignition[1093]: INFO : Ignition finished successfully Nov 5 14:58:49.858279 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 5 14:58:49.859733 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 5 14:58:49.861296 systemd[1]: Stopped target network.target - Network. Nov 5 14:58:49.862728 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 5 14:58:49.862802 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Nov 5 14:58:49.864787 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 5 14:58:49.864847 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 5 14:58:49.866701 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 5 14:58:49.866763 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 5 14:58:49.868745 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 5 14:58:49.868795 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 5 14:58:49.870831 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 5 14:58:49.872894 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Nov 5 14:58:49.880977 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 5 14:58:49.881094 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 5 14:58:49.888460 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 5 14:58:49.888603 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 5 14:58:49.892453 systemd[1]: Stopped target network-pre.target - Preparation for Network. Nov 5 14:58:49.894588 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 5 14:58:49.894631 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 5 14:58:49.897719 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 5 14:58:49.898654 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 5 14:58:49.898752 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 5 14:58:49.901664 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 14:58:49.901737 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 14:58:49.904488 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 5 14:58:49.904539 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 5 14:58:49.906633 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 14:58:49.910898 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 5 14:58:49.910981 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 5 14:58:49.914027 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Nov 5 14:58:49.914127 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 5 14:58:49.921719 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 5 14:58:49.921881 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 14:58:49.926050 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 5 14:58:49.926109 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 5 14:58:49.927973 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 5 14:58:49.928010 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 14:58:49.930084 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 5 14:58:49.930154 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 5 14:58:49.932929 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 5 14:58:49.932988 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 5 14:58:49.935738 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 5 14:58:49.935804 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 5 14:58:49.941477 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 5 14:58:49.942842 systemd[1]: systemd-network-generator.service: Deactivated successfully. Nov 5 14:58:49.942925 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 14:58:49.945400 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 5 14:58:49.945461 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 14:58:49.947847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 5 14:58:49.947901 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 14:58:49.950662 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 5 14:58:49.954850 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 5 14:58:49.961140 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 5 14:58:49.961258 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 5 14:58:49.963785 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 5 14:58:49.966583 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 5 14:58:49.987312 systemd[1]: Switching root. Nov 5 14:58:50.027072 systemd-journald[347]: Journal stopped Nov 5 14:58:50.834021 systemd-journald[347]: Received SIGTERM from PID 1 (systemd). 
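The Ignition "files" stage logged above (writing /etc/flatcar/update.conf, fetching the kubernetes sysext image, linking it into /etc/extensions, and installing/presetting the prepare-helm.service and coreos-metadata.service units) is driven by a declarative config supplied earlier in boot. As a minimal sketch only, with a hypothetical config fragment rather than the one this machine actually booted with, an Ignition v3-style document producing similar operations could look like this (Python used just to emit the JSON):

```python
import json

# Hypothetical, illustrative Ignition-style config fragment; the file contents
# and update.conf payload below are assumptions, only the paths/URL come from the log.
config = {
    "ignition": {"version": "3.4.0"},
    "storage": {
        "files": [
            {"path": "/etc/flatcar/update.conf",
             "mode": 420,  # 0644
             "contents": {"source": "data:,GROUP%3Dstable%0A"}},
            {"path": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
             "contents": {"source": "https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw"}},
        ],
        "links": [
            {"path": "/etc/extensions/kubernetes.raw",
             "target": "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw",
             "hard": False},
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"},
            {"name": "coreos-metadata.service", "enabled": False},
        ],
    },
}

print(json.dumps(config, indent=2))
```

The "setting preset to enabled/disabled" lines in the log correspond to the `enabled` flags on those unit entries.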
Nov 5 14:58:50.834069 kernel: SELinux: policy capability network_peer_controls=1 Nov 5 14:58:50.834082 kernel: SELinux: policy capability open_perms=1 Nov 5 14:58:50.834095 kernel: SELinux: policy capability extended_socket_class=1 Nov 5 14:58:50.834110 kernel: SELinux: policy capability always_check_network=0 Nov 5 14:58:50.834121 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 5 14:58:50.834132 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 5 14:58:50.834149 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 5 14:58:50.834160 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 5 14:58:50.834170 kernel: SELinux: policy capability userspace_initial_context=0 Nov 5 14:58:50.834183 kernel: audit: type=1403 audit(1762354730.235:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 5 14:58:50.834194 systemd[1]: Successfully loaded SELinux policy in 59.184ms. Nov 5 14:58:50.834207 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.985ms. Nov 5 14:58:50.834219 systemd[1]: systemd 257.7 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 5 14:58:50.834232 systemd[1]: Detected virtualization kvm. Nov 5 14:58:50.834242 systemd[1]: Detected architecture arm64. Nov 5 14:58:50.834253 systemd[1]: Detected first boot. Nov 5 14:58:50.834266 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Nov 5 14:58:50.834277 zram_generator::config[1140]: No configuration found. Nov 5 14:58:50.834289 kernel: NET: Registered PF_VSOCK protocol family Nov 5 14:58:50.834300 systemd[1]: Populated /etc with preset unit settings. Nov 5 14:58:50.834313 systemd[1]: initrd-switch-root.service: Deactivated successfully. Nov 5 14:58:50.834327 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 5 14:58:50.834338 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 5 14:58:50.834357 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 5 14:58:50.834368 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 5 14:58:50.834379 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 5 14:58:50.834391 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 5 14:58:50.834402 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 5 14:58:50.834413 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 5 14:58:50.834424 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 5 14:58:50.834434 systemd[1]: Created slice user.slice - User and Session Slice. Nov 5 14:58:50.834445 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 5 14:58:50.834456 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 5 14:58:50.834468 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 5 14:58:50.834478 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
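The lines above record the SELinux policy load, first-boot detection, and the machine ID being initialized from the SMBIOS/DMI UUID. A small sketch (assuming it runs on the booted system and that these standard paths are present) of checking those facts from userspace afterwards:

```python
from pathlib import Path

def machine_id() -> str:
    # Populated once systemd has initialized it (here: from the DMI product UUID).
    return Path("/etc/machine-id").read_text().strip()

def selinux_enforcing() -> bool | None:
    # /sys/fs/selinux is available once a policy is loaded, as in the log above.
    enforce = Path("/sys/fs/selinux/enforce")
    return enforce.read_text().strip() == "1" if enforce.exists() else None

if __name__ == "__main__":
    print("machine-id:", machine_id())
    print("SELinux enforcing:", selinux_enforcing())
```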
Nov 5 14:58:50.834489 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 5 14:58:50.834499 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 5 14:58:50.834511 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 5 14:58:50.834522 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 5 14:58:50.834532 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 5 14:58:50.834544 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 5 14:58:50.834555 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 5 14:58:50.834565 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 5 14:58:50.834576 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 5 14:58:50.834587 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 5 14:58:50.834597 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 5 14:58:50.834608 systemd[1]: Reached target slices.target - Slice Units. Nov 5 14:58:50.834620 systemd[1]: Reached target swap.target - Swaps. Nov 5 14:58:50.834631 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 5 14:58:50.834641 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 5 14:58:50.834652 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 5 14:58:50.834662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 5 14:58:50.834673 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 5 14:58:50.834734 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 5 14:58:50.834750 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 5 14:58:50.834761 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 5 14:58:50.834772 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 5 14:58:50.834782 systemd[1]: Mounting media.mount - External Media Directory... Nov 5 14:58:50.834794 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 5 14:58:50.834805 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 5 14:58:50.834815 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Nov 5 14:58:50.834828 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 5 14:58:50.834839 systemd[1]: Reached target machines.target - Containers. Nov 5 14:58:50.834850 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 5 14:58:50.834861 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 14:58:50.834872 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 5 14:58:50.834883 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 5 14:58:50.834894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 14:58:50.834906 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Nov 5 14:58:50.834916 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 14:58:50.834927 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 5 14:58:50.834937 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 14:58:50.834949 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 5 14:58:50.834959 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 5 14:58:50.834971 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 5 14:58:50.834983 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 5 14:58:50.834993 systemd[1]: Stopped systemd-fsck-usr.service. Nov 5 14:58:50.835003 kernel: fuse: init (API version 7.41) Nov 5 14:58:50.835014 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 14:58:50.835025 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 5 14:58:50.835036 kernel: ACPI: bus type drm_connector registered Nov 5 14:58:50.835047 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 5 14:58:50.835057 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 5 14:58:50.835068 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 5 14:58:50.835079 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 5 14:58:50.835090 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 5 14:58:50.835101 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Nov 5 14:58:50.835111 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 5 14:58:50.835123 systemd[1]: Mounted media.mount - External Media Directory. Nov 5 14:58:50.835134 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 5 14:58:50.835145 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 5 14:58:50.835176 systemd-journald[1212]: Collecting audit messages is disabled. Nov 5 14:58:50.835199 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 5 14:58:50.835210 systemd-journald[1212]: Journal started Nov 5 14:58:50.835231 systemd-journald[1212]: Runtime Journal (/run/log/journal/551a6bac5ddc48bb834acdf9fa8f36b6) is 6M, max 48.5M, 42.4M free. Nov 5 14:58:50.603305 systemd[1]: Queued start job for default target multi-user.target. Nov 5 14:58:50.625707 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 5 14:58:50.626157 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 5 14:58:50.838236 systemd[1]: Started systemd-journald.service - Journal Service. Nov 5 14:58:50.840724 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 5 14:58:50.842264 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 5 14:58:50.843914 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 5 14:58:50.844776 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 5 14:58:50.846219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Nov 5 14:58:50.846403 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 14:58:50.847887 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 14:58:50.848056 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 14:58:50.849547 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 14:58:50.849767 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 14:58:50.851375 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 5 14:58:50.851548 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 5 14:58:50.853096 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 14:58:50.853261 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 14:58:50.854807 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 5 14:58:50.856424 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 5 14:58:50.858647 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 5 14:58:50.861758 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 5 14:58:50.870641 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 5 14:58:50.876623 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 5 14:58:50.878269 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Nov 5 14:58:50.880671 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 5 14:58:50.882676 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 5 14:58:50.883968 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 5 14:58:50.884006 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 5 14:58:50.885978 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 5 14:58:50.887489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 14:58:50.892493 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 5 14:58:50.894797 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 5 14:58:50.896051 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 14:58:50.897003 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 5 14:58:50.898311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 14:58:50.900869 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 5 14:58:50.902573 systemd-journald[1212]: Time spent on flushing to /var/log/journal/551a6bac5ddc48bb834acdf9fa8f36b6 is 15.604ms for 874 entries. Nov 5 14:58:50.902573 systemd-journald[1212]: System Journal (/var/log/journal/551a6bac5ddc48bb834acdf9fa8f36b6) is 8M, max 163.5M, 155.5M free. Nov 5 14:58:50.936678 systemd-journald[1212]: Received client request to flush runtime journal. 
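systemd-journald reports above that flushing the runtime journal to /var/log/journal took 15.604 ms for 874 entries. As a quick worked check of the per-entry cost implied by that message (figures taken from the log, the arithmetic is illustrative):

```python
# Per-entry flush cost implied by the journald message above.
flush_ms = 15.604
entries = 874

per_entry_us = flush_ms * 1000 / entries
print(f"{per_entry_us:.1f} us per entry")  # about 17.9 us per journal entry
```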
Nov 5 14:58:50.936745 kernel: loop1: detected capacity change from 0 to 100624 Nov 5 14:58:50.904879 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 5 14:58:50.907829 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 5 14:58:50.910155 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 5 14:58:50.911557 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 5 14:58:50.913195 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 5 14:58:50.916314 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 5 14:58:50.921926 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 5 14:58:50.924602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 14:58:50.939660 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 5 14:58:50.941427 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 5 14:58:50.945711 kernel: loop2: detected capacity change from 0 to 207008 Nov 5 14:58:50.946832 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 5 14:58:50.949455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 5 14:58:50.954767 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 5 14:58:50.959791 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 5 14:58:50.983710 kernel: loop3: detected capacity change from 0 to 119344 Nov 5 14:58:50.986636 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Nov 5 14:58:50.986655 systemd-tmpfiles[1273]: ACLs are not supported, ignoring. Nov 5 14:58:50.990115 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 5 14:58:51.002012 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 5 14:58:51.008708 kernel: loop4: detected capacity change from 0 to 100624 Nov 5 14:58:51.014718 kernel: loop5: detected capacity change from 0 to 207008 Nov 5 14:58:51.020773 kernel: loop6: detected capacity change from 0 to 119344 Nov 5 14:58:51.023932 (sd-merge)[1284]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw'. Nov 5 14:58:51.026646 (sd-merge)[1284]: Merged extensions into '/usr'. Nov 5 14:58:51.030597 systemd[1]: Reload requested from client PID 1257 ('systemd-sysext') (unit systemd-sysext.service)... Nov 5 14:58:51.030613 systemd[1]: Reloading... Nov 5 14:58:51.056154 systemd-resolved[1272]: Positive Trust Anchors: Nov 5 14:58:51.056470 systemd-resolved[1272]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 5 14:58:51.056522 systemd-resolved[1272]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Nov 5 14:58:51.056595 systemd-resolved[1272]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 5 14:58:51.063111 systemd-resolved[1272]: Defaulting to hostname 'linux'. Nov 5 14:58:51.085711 zram_generator::config[1317]: No configuration found. Nov 5 14:58:51.216034 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 5 14:58:51.216208 systemd[1]: Reloading finished in 185 ms. Nov 5 14:58:51.232318 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 5 14:58:51.233755 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 5 14:58:51.236772 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 5 14:58:51.248015 systemd[1]: Starting ensure-sysext.service... Nov 5 14:58:51.249970 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 5 14:58:51.258590 systemd[1]: Reload requested from client PID 1347 ('systemctl') (unit ensure-sysext.service)... Nov 5 14:58:51.258606 systemd[1]: Reloading... Nov 5 14:58:51.264097 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Nov 5 14:58:51.264414 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Nov 5 14:58:51.264858 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 5 14:58:51.265152 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 5 14:58:51.265882 systemd-tmpfiles[1348]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 5 14:58:51.266144 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Nov 5 14:58:51.266271 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Nov 5 14:58:51.269786 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 14:58:51.269886 systemd-tmpfiles[1348]: Skipping /boot Nov 5 14:58:51.276380 systemd-tmpfiles[1348]: Detected autofs mount point /boot during canonicalization of boot. Nov 5 14:58:51.276498 systemd-tmpfiles[1348]: Skipping /boot Nov 5 14:58:51.311711 zram_generator::config[1381]: No configuration found. Nov 5 14:58:51.437200 systemd[1]: Reloading finished in 178 ms. Nov 5 14:58:51.463381 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 5 14:58:51.487987 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 5 14:58:51.495581 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 14:58:51.497743 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 5 14:58:51.511760 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
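Earlier in this segment, sd-merge reports merging the 'containerd-flatcar.raw', 'docker-flatcar.raw' and 'kubernetes.raw' system extensions into /usr; the kubernetes image is the one Ignition linked into /etc/extensions above. A minimal sketch of the discovery step only (not systemd's actual implementation), listing the extension images systemd-sysext would consider from its usual search directories:

```python
from pathlib import Path

# Common directories systemd-sysext scans for extension images
# (there are additional /usr-based locations not listed here).
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

def candidate_extensions() -> list[Path]:
    found: list[Path] = []
    for d in map(Path, SEARCH_DIRS):
        if d.is_dir():
            found.extend(sorted(p for p in d.iterdir()
                                if p.suffix == ".raw" or p.is_dir()))
    return found

for image in candidate_extensions():
    print(image)  # e.g. /etc/extensions/kubernetes.raw, merged into /usr at boot
```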
Nov 5 14:58:51.514393 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 5 14:58:51.519197 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 5 14:58:51.521800 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 5 14:58:51.527120 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 14:58:51.535498 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 14:58:51.540382 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 5 14:58:51.543475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 14:58:51.544931 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 14:58:51.545059 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 14:58:51.547289 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 14:58:51.547461 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 14:58:51.551534 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 14:58:51.551789 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 14:58:51.554071 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 5 14:58:51.559153 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 5 14:58:51.560384 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 5 14:58:51.562824 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 5 14:58:51.569300 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 5 14:58:51.570790 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 5 14:58:51.570918 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 5 14:58:51.571043 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 5 14:58:51.572222 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 5 14:58:51.574266 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 5 14:58:51.575837 augenrules[1448]: No rules Nov 5 14:58:51.580759 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 5 14:58:51.583211 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 14:58:51.583413 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 14:58:51.585412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 5 14:58:51.585584 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 5 14:58:51.586161 systemd-udevd[1419]: Using default interface naming scheme 'v257'. 
Nov 5 14:58:51.588543 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 5 14:58:51.589431 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 5 14:58:51.591193 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 5 14:58:51.591381 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 5 14:58:51.596862 systemd[1]: Finished ensure-sysext.service. Nov 5 14:58:51.604054 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 5 14:58:51.604328 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 5 14:58:51.606493 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 5 14:58:51.608543 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 5 14:58:51.610336 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 5 14:58:51.621733 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 5 14:58:51.647905 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 5 14:58:51.688848 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 5 14:58:51.703100 systemd[1]: Reached target time-set.target - System Time Set. Nov 5 14:58:51.723987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 5 14:58:51.725089 systemd-networkd[1476]: lo: Link UP Nov 5 14:58:51.725106 systemd-networkd[1476]: lo: Gained carrier Nov 5 14:58:51.727158 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 5 14:58:51.728145 systemd-networkd[1476]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 14:58:51.728160 systemd-networkd[1476]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 5 14:58:51.729075 systemd-networkd[1476]: eth0: Link UP Nov 5 14:58:51.729190 systemd-networkd[1476]: eth0: Gained carrier Nov 5 14:58:51.729209 systemd-networkd[1476]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Nov 5 14:58:51.730039 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 5 14:58:51.731594 systemd[1]: Reached target network.target - Network. Nov 5 14:58:51.735029 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 5 14:58:51.737952 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 5 14:58:51.746768 systemd-networkd[1476]: eth0: DHCPv4 address 10.0.0.22/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 5 14:58:51.747442 systemd-timesyncd[1460]: Network configuration changed, trying to establish connection. Nov 5 14:58:51.748661 systemd-timesyncd[1460]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 5 14:58:51.748749 systemd-timesyncd[1460]: Initial clock synchronization to Wed 2025-11-05 14:58:51.391776 UTC. Nov 5 14:58:51.760100 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
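systemd-networkd matches eth0 against the catch-all zz-default.network and acquires 10.0.0.22/16 with gateway 10.0.0.1, which systemd-timesyncd then contacts as its NTP server. A small sketch confirming that a lease like the one logged is internally consistent (addresses from the log; the check itself is illustrative):

```python
import ipaddress

# Values as reported by systemd-networkd above.
lease = ipaddress.ip_interface("10.0.0.22/16")
gateway = ipaddress.ip_address("10.0.0.1")

assert gateway in lease.network                       # gateway is on-link
print("network:  ", lease.network)                    # 10.0.0.0/16
print("broadcast:", lease.network.broadcast_address)  # 10.0.255.255
```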
Nov 5 14:58:51.762199 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 5 14:58:51.839716 ldconfig[1416]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 5 14:58:51.844646 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 5 14:58:51.857968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 5 14:58:51.861889 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 5 14:58:51.880636 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 5 14:58:51.907052 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 5 14:58:51.909882 systemd[1]: Reached target sysinit.target - System Initialization. Nov 5 14:58:51.911127 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 5 14:58:51.912548 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 5 14:58:51.914180 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 5 14:58:51.915541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 5 14:58:51.917164 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 5 14:58:51.918533 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 5 14:58:51.918570 systemd[1]: Reached target paths.target - Path Units. Nov 5 14:58:51.919603 systemd[1]: Reached target timers.target - Timer Units. Nov 5 14:58:51.921468 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 5 14:58:51.924096 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 5 14:58:51.927100 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 5 14:58:51.928778 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 5 14:58:51.930083 systemd[1]: Reached target ssh-access.target - SSH Access Available. Nov 5 14:58:51.933606 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 5 14:58:51.935136 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 5 14:58:51.937188 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 5 14:58:51.938559 systemd[1]: Reached target sockets.target - Socket Units. Nov 5 14:58:51.939723 systemd[1]: Reached target basic.target - Basic System. Nov 5 14:58:51.940876 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 5 14:58:51.940910 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 5 14:58:51.941961 systemd[1]: Starting containerd.service - containerd container runtime... Nov 5 14:58:51.944188 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 5 14:58:51.946855 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 5 14:58:51.948990 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 5 14:58:51.951006 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Nov 5 14:58:51.952140 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 5 14:58:51.954981 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 5 14:58:51.956319 jq[1529]: false Nov 5 14:58:51.957880 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 5 14:58:51.960869 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 5 14:58:51.963535 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 5 14:58:51.964569 extend-filesystems[1530]: Found /dev/vda6 Nov 5 14:58:51.968040 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 5 14:58:51.969390 extend-filesystems[1530]: Found /dev/vda9 Nov 5 14:58:51.969596 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 5 14:58:51.970073 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 5 14:58:51.971218 systemd[1]: Starting update-engine.service - Update Engine... Nov 5 14:58:51.973494 extend-filesystems[1530]: Checking size of /dev/vda9 Nov 5 14:58:51.975033 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 5 14:58:51.979727 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 5 14:58:51.981587 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 5 14:58:51.982532 jq[1548]: true Nov 5 14:58:51.981819 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 5 14:58:51.982090 systemd[1]: motdgen.service: Deactivated successfully. Nov 5 14:58:51.982246 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 5 14:58:51.985015 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 5 14:58:51.985175 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 5 14:58:51.990768 extend-filesystems[1530]: Resized partition /dev/vda9 Nov 5 14:58:51.995822 extend-filesystems[1567]: resize2fs 1.47.3 (8-Jul-2025) Nov 5 14:58:52.002869 kernel: EXT4-fs (vda9): resizing filesystem from 456704 to 1784827 blocks Nov 5 14:58:52.001173 (ntainerd)[1563]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 5 14:58:52.005055 jq[1560]: true Nov 5 14:58:52.018667 tar[1555]: linux-arm64/LICENSE Nov 5 14:58:52.018667 tar[1555]: linux-arm64/helm Nov 5 14:58:52.031222 update_engine[1546]: I20251105 14:58:52.023490 1546 main.cc:92] Flatcar Update Engine starting Nov 5 14:58:52.036717 kernel: EXT4-fs (vda9): resized filesystem to 1784827 Nov 5 14:58:52.055965 dbus-daemon[1527]: [system] SELinux support is enabled Nov 5 14:58:52.056826 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 5 14:58:52.059658 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Nov 5 14:58:52.061852 extend-filesystems[1567]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 5 14:58:52.061852 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 5 14:58:52.061852 extend-filesystems[1567]: The filesystem on /dev/vda9 is now 1784827 (4k) blocks long. Nov 5 14:58:52.060084 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 5 14:58:52.072337 extend-filesystems[1530]: Resized filesystem in /dev/vda9 Nov 5 14:58:52.061980 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 5 14:58:52.078452 update_engine[1546]: I20251105 14:58:52.077461 1546 update_check_scheduler.cc:74] Next update check in 3m16s Nov 5 14:58:52.061994 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 5 14:58:52.066543 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 5 14:58:52.066787 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 5 14:58:52.077590 systemd[1]: Started update-engine.service - Update Engine. Nov 5 14:58:52.086325 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 5 14:58:52.090544 bash[1595]: Updated "/home/core/.ssh/authorized_keys" Nov 5 14:58:52.091901 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 5 14:58:52.093777 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 5 14:58:52.096625 systemd-logind[1544]: Watching system buttons on /dev/input/event0 (Power Button) Nov 5 14:58:52.100397 systemd-logind[1544]: New seat seat0. Nov 5 14:58:52.101790 systemd[1]: Started systemd-logind.service - User Login Management. 
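extend-filesystems grows the root filesystem on /dev/vda9 online from 456704 to 1784827 blocks at a 4 KiB block size. Converting those block counts to sizes (block counts from the log; the conversion is plain arithmetic):

```python
# Block counts reported by resize2fs / EXT4 above, with a 4 KiB block size.
BLOCK = 4096
before, after = 456_704, 1_784_827

def to_gib(blocks: int) -> float:
    return blocks * BLOCK / 2**30

print(f"before: {to_gib(before):.2f} GiB")  # about 1.74 GiB
print(f"after:  {to_gib(after):.2f} GiB")   # about 6.81 GiB
```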
Nov 5 14:58:52.168007 locksmithd[1596]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 5 14:58:52.197846 containerd[1563]: time="2025-11-05T14:58:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Nov 5 14:58:52.199560 containerd[1563]: time="2025-11-05T14:58:52.199520033Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Nov 5 14:58:52.208967 containerd[1563]: time="2025-11-05T14:58:52.208914731Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.248µs" Nov 5 14:58:52.208967 containerd[1563]: time="2025-11-05T14:58:52.208954705Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Nov 5 14:58:52.208967 containerd[1563]: time="2025-11-05T14:58:52.208980386Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Nov 5 14:58:52.209155 containerd[1563]: time="2025-11-05T14:58:52.209135657Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Nov 5 14:58:52.209182 containerd[1563]: time="2025-11-05T14:58:52.209160345Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Nov 5 14:58:52.209200 containerd[1563]: time="2025-11-05T14:58:52.209187440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 14:58:52.209249 containerd[1563]: time="2025-11-05T14:58:52.209232382Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Nov 5 14:58:52.209249 containerd[1563]: time="2025-11-05T14:58:52.209246483Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 14:58:52.209467 containerd[1563]: time="2025-11-05T14:58:52.209437716Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Nov 5 14:58:52.209467 containerd[1563]: time="2025-11-05T14:58:52.209464849Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 14:58:52.209514 containerd[1563]: time="2025-11-05T14:58:52.209475894Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Nov 5 14:58:52.209514 containerd[1563]: time="2025-11-05T14:58:52.209483422Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Nov 5 14:58:52.209718 containerd[1563]: time="2025-11-05T14:58:52.209694412Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Nov 5 14:58:52.210010 containerd[1563]: time="2025-11-05T14:58:52.209986458Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 14:58:52.210041 containerd[1563]: time="2025-11-05T14:58:52.210022420Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Nov 5 14:58:52.210041 containerd[1563]: time="2025-11-05T14:58:52.210033655Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Nov 5 14:58:52.210081 containerd[1563]: time="2025-11-05T14:58:52.210067247Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Nov 5 14:58:52.210337 containerd[1563]: time="2025-11-05T14:58:52.210318937Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Nov 5 14:58:52.210416 containerd[1563]: time="2025-11-05T14:58:52.210397624Z" level=info msg="metadata content store policy set" policy=shared Nov 5 14:58:52.226868 containerd[1563]: time="2025-11-05T14:58:52.226804371Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Nov 5 14:58:52.226984 containerd[1563]: time="2025-11-05T14:58:52.226888714Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Nov 5 14:58:52.226984 containerd[1563]: time="2025-11-05T14:58:52.226906026Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Nov 5 14:58:52.226984 containerd[1563]: time="2025-11-05T14:58:52.226928688Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Nov 5 14:58:52.227058 containerd[1563]: time="2025-11-05T14:58:52.226991056Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Nov 5 14:58:52.227058 containerd[1563]: time="2025-11-05T14:58:52.227005616Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Nov 5 14:58:52.227058 containerd[1563]: time="2025-11-05T14:58:52.227021170Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Nov 5 14:58:52.227058 containerd[1563]: time="2025-11-05T14:58:52.227035425Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Nov 5 14:58:52.227058 containerd[1563]: time="2025-11-05T14:58:52.227058125Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Nov 5 14:58:52.227134 containerd[1563]: time="2025-11-05T14:58:52.227070927Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Nov 5 14:58:52.227134 containerd[1563]: time="2025-11-05T14:58:52.227080061Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Nov 5 14:58:52.227134 containerd[1563]: time="2025-11-05T14:58:52.227092214Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Nov 5 14:58:52.227264 containerd[1563]: time="2025-11-05T14:58:52.227240109Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Nov 5 14:58:52.227292 containerd[1563]: time="2025-11-05T14:58:52.227276491Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Nov 5 14:58:52.227314 containerd[1563]: time="2025-11-05T14:58:52.227296172Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Nov 5 14:58:52.227314 
containerd[1563]: time="2025-11-05T14:58:52.227307064Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Nov 5 14:58:52.227348 containerd[1563]: time="2025-11-05T14:58:52.227317955Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Nov 5 14:58:52.227348 containerd[1563]: time="2025-11-05T14:58:52.227328579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Nov 5 14:58:52.227379 containerd[1563]: time="2025-11-05T14:58:52.227346579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Nov 5 14:58:52.227379 containerd[1563]: time="2025-11-05T14:58:52.227357585Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Nov 5 14:58:52.227379 containerd[1563]: time="2025-11-05T14:58:52.227375203Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Nov 5 14:58:52.227433 containerd[1563]: time="2025-11-05T14:58:52.227385788Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Nov 5 14:58:52.227433 containerd[1563]: time="2025-11-05T14:58:52.227396871Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Nov 5 14:58:52.227622 containerd[1563]: time="2025-11-05T14:58:52.227602817Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Nov 5 14:58:52.227669 containerd[1563]: time="2025-11-05T14:58:52.227623874Z" level=info msg="Start snapshots syncer" Nov 5 14:58:52.227669 containerd[1563]: time="2025-11-05T14:58:52.227648829Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Nov 5 14:58:52.229753 containerd[1563]: time="2025-11-05T14:58:52.227946951Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Nov 5 14:58:52.229753 containerd[1563]: time="2025-11-05T14:58:52.228009816Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228092707Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228226806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228249163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228262309Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228279468Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228290627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228311569Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228322117Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228345237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Nov 5 14:58:52.229898 containerd[1563]: 
time="2025-11-05T14:58:52.228360868Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228379097Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228415440Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228428854Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Nov 5 14:58:52.229898 containerd[1563]: time="2025-11-05T14:58:52.228437185Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228455414Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228462866Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228473490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228489999Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228585730Z" level=info msg="runtime interface created" Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228593641Z" level=info msg="created NRI interface" Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228601781Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228613360Z" level=info msg="Connect containerd service" Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.228641334Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 5 14:58:52.230122 containerd[1563]: time="2025-11-05T14:58:52.229614159Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 14:58:52.301206 containerd[1563]: time="2025-11-05T14:58:52.301098084Z" level=info msg="Start subscribing containerd event" Nov 5 14:58:52.301413 containerd[1563]: time="2025-11-05T14:58:52.301392882Z" level=info msg="Start recovering state" Nov 5 14:58:52.301586 containerd[1563]: time="2025-11-05T14:58:52.301555529Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 5 14:58:52.301654 containerd[1563]: time="2025-11-05T14:58:52.301633375Z" level=info msg="Start event monitor" Nov 5 14:58:52.301736 containerd[1563]: time="2025-11-05T14:58:52.301723029Z" level=info msg="Start cni network conf syncer for default" Nov 5 14:58:52.301810 containerd[1563]: time="2025-11-05T14:58:52.301634254Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Nov 5 14:58:52.301862 containerd[1563]: time="2025-11-05T14:58:52.301778175Z" level=info msg="Start streaming server" Nov 5 14:58:52.301862 containerd[1563]: time="2025-11-05T14:58:52.301837639Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Nov 5 14:58:52.301862 containerd[1563]: time="2025-11-05T14:58:52.301846391Z" level=info msg="runtime interface starting up..." Nov 5 14:58:52.301862 containerd[1563]: time="2025-11-05T14:58:52.301851091Z" level=info msg="starting plugins..." Nov 5 14:58:52.301925 containerd[1563]: time="2025-11-05T14:58:52.301869893Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Nov 5 14:58:52.303709 containerd[1563]: time="2025-11-05T14:58:52.303020460Z" level=info msg="containerd successfully booted in 0.105545s" Nov 5 14:58:52.303120 systemd[1]: Started containerd.service - containerd container runtime. Nov 5 14:58:52.344952 tar[1555]: linux-arm64/README.md Nov 5 14:58:52.361970 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 5 14:58:52.483881 sshd_keygen[1561]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 5 14:58:52.503013 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 5 14:58:52.506399 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 5 14:58:52.521675 systemd[1]: issuegen.service: Deactivated successfully. Nov 5 14:58:52.522114 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 5 14:58:52.525718 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 5 14:58:52.556735 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 5 14:58:52.559596 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 5 14:58:52.561975 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 5 14:58:52.563631 systemd[1]: Reached target getty.target - Login Prompts. Nov 5 14:58:52.939830 systemd-networkd[1476]: eth0: Gained IPv6LL Nov 5 14:58:52.942065 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 5 14:58:52.944048 systemd[1]: Reached target network-online.target - Network is Online. Nov 5 14:58:52.946601 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 5 14:58:52.949035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:58:52.960812 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 5 14:58:52.978423 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 5 14:58:52.979867 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 5 14:58:52.982244 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 5 14:58:52.994445 kernel: hrtimer: interrupt took 11957611 ns Nov 5 14:58:53.001147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 5 14:58:53.519298 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:58:53.520922 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 5 14:58:53.523364 (kubelet)[1666]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 14:58:53.527118 systemd[1]: Startup finished in 1.174s (kernel) + 6.141s (initrd) + 3.351s (userspace) = 10.667s. 
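The "failed to load cni during init" error a few entries above is expected at this stage: the CRI plugin's conf syncer watches the confDir from the config dump (/etc/cni/net.d), and the warning clears once a network add-on drops a CNI conflist there. Purely as an illustration (not code from containerd), a minimal Go sketch of the same directory check:

// cnicheck.go: a hedged sketch that mirrors the check behind the
// "no network config found in /etc/cni/net.d" error above by looking for
// *.conf, *.conflist or *.json files in the CNI conf directory.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	confDir := "/etc/cni/net.d" // matches "confDir" in the CRI config dump above
	var found []string
	for _, pat := range []string{"*.conf", "*.conflist", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			fmt.Fprintln(os.Stderr, "glob:", err)
			os.Exit(1)
		}
		found = append(found, matches...)
	}
	if len(found) == 0 {
		fmt.Println("no network config found in", confDir, "- CNI not initialized yet")
		return
	}
	fmt.Println("CNI network configs:", found)
}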
Nov 5 14:58:53.877779 kubelet[1666]: E1105 14:58:53.877726 1666 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 14:58:53.879711 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 14:58:53.879830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 14:58:53.880805 systemd[1]: kubelet.service: Consumed 770ms CPU time, 257.8M memory peak. Nov 5 14:58:55.157188 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 5 14:58:55.158733 systemd[1]: Started sshd@0-10.0.0.22:22-10.0.0.1:47668.service - OpenSSH per-connection server daemon (10.0.0.1:47668). Nov 5 14:58:55.231344 sshd[1679]: Accepted publickey for core from 10.0.0.1 port 47668 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:58:55.233268 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:55.240306 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 5 14:58:55.241145 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 5 14:58:55.246272 systemd-logind[1544]: New session 1 of user core. Nov 5 14:58:55.263244 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 5 14:58:55.266760 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 5 14:58:55.294745 (systemd)[1684]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 5 14:58:55.297703 systemd-logind[1544]: New session c1 of user core. Nov 5 14:58:55.410597 systemd[1684]: Queued start job for default target default.target. Nov 5 14:58:55.420625 systemd[1684]: Created slice app.slice - User Application Slice. Nov 5 14:58:55.420663 systemd[1684]: Reached target paths.target - Paths. Nov 5 14:58:55.420729 systemd[1684]: Reached target timers.target - Timers. Nov 5 14:58:55.422006 systemd[1684]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 5 14:58:55.432053 systemd[1684]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 5 14:58:55.432116 systemd[1684]: Reached target sockets.target - Sockets. Nov 5 14:58:55.432152 systemd[1684]: Reached target basic.target - Basic System. Nov 5 14:58:55.432178 systemd[1684]: Reached target default.target - Main User Target. Nov 5 14:58:55.432201 systemd[1684]: Startup finished in 128ms. Nov 5 14:58:55.432811 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 5 14:58:55.436897 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 5 14:58:55.505653 systemd[1]: Started sshd@1-10.0.0.22:22-10.0.0.1:47680.service - OpenSSH per-connection server daemon (10.0.0.1:47680). Nov 5 14:58:55.585592 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 47680 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:58:55.587566 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:55.593374 systemd-logind[1544]: New session 2 of user core. Nov 5 14:58:55.618824 systemd[1]: Started session-2.scope - Session 2 of User core. 
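The kubelet failure above is the normal first-boot state for a node that has not been initialized yet: /var/lib/kubelet/config.yaml is typically written by kubeadm init or kubeadm join, so until that happens the unit exits with status 1 and systemd keeps restarting it. A stdlib Go sketch of the same existence check, for illustration only:

// kubeletcfgprobe.go: a hedged sketch (not part of any shipped tool) that
// reproduces the failing check above; the kubelet exits because
// /var/lib/kubelet/config.yaml has not been written yet.
package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

func main() {
	const path = "/var/lib/kubelet/config.yaml"
	_, err := os.Stat(path)
	switch {
	case err == nil:
		fmt.Println(path, "exists; kubelet can load its config")
	case errors.Is(err, fs.ErrNotExist):
		fmt.Println(path, "missing; node not yet initialized, kubelet will keep failing")
	default:
		fmt.Println("unexpected error:", err)
	}
}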
Nov 5 14:58:55.676352 sshd[1698]: Connection closed by 10.0.0.1 port 47680 Nov 5 14:58:55.676762 sshd-session[1695]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:55.691509 systemd[1]: sshd@1-10.0.0.22:22-10.0.0.1:47680.service: Deactivated successfully. Nov 5 14:58:55.693736 systemd[1]: session-2.scope: Deactivated successfully. Nov 5 14:58:55.695063 systemd-logind[1544]: Session 2 logged out. Waiting for processes to exit. Nov 5 14:58:55.696389 systemd-logind[1544]: Removed session 2. Nov 5 14:58:55.699053 systemd[1]: Started sshd@2-10.0.0.22:22-10.0.0.1:47686.service - OpenSSH per-connection server daemon (10.0.0.1:47686). Nov 5 14:58:55.763530 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 47686 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:58:55.764773 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:55.768742 systemd-logind[1544]: New session 3 of user core. Nov 5 14:58:55.789932 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 5 14:58:55.837327 sshd[1707]: Connection closed by 10.0.0.1 port 47686 Nov 5 14:58:55.837730 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:55.850436 systemd[1]: sshd@2-10.0.0.22:22-10.0.0.1:47686.service: Deactivated successfully. Nov 5 14:58:55.854281 systemd[1]: session-3.scope: Deactivated successfully. Nov 5 14:58:55.859795 systemd-logind[1544]: Session 3 logged out. Waiting for processes to exit. Nov 5 14:58:55.866036 systemd[1]: Started sshd@3-10.0.0.22:22-10.0.0.1:47698.service - OpenSSH per-connection server daemon (10.0.0.1:47698). Nov 5 14:58:55.866681 systemd-logind[1544]: Removed session 3. Nov 5 14:58:55.926719 sshd[1713]: Accepted publickey for core from 10.0.0.1 port 47698 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:58:55.927945 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:55.932711 systemd-logind[1544]: New session 4 of user core. Nov 5 14:58:55.943899 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 5 14:58:55.993283 sshd[1716]: Connection closed by 10.0.0.1 port 47698 Nov 5 14:58:55.993566 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:56.003853 systemd[1]: sshd@3-10.0.0.22:22-10.0.0.1:47698.service: Deactivated successfully. Nov 5 14:58:56.007509 systemd[1]: session-4.scope: Deactivated successfully. Nov 5 14:58:56.008287 systemd-logind[1544]: Session 4 logged out. Waiting for processes to exit. Nov 5 14:58:56.011026 systemd[1]: Started sshd@4-10.0.0.22:22-10.0.0.1:47714.service - OpenSSH per-connection server daemon (10.0.0.1:47714). Nov 5 14:58:56.011822 systemd-logind[1544]: Removed session 4. Nov 5 14:58:56.065399 sshd[1722]: Accepted publickey for core from 10.0.0.1 port 47714 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:58:56.067439 sshd-session[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:56.076537 systemd-logind[1544]: New session 5 of user core. Nov 5 14:58:56.086858 systemd[1]: Started session-5.scope - Session 5 of User core. 
Nov 5 14:58:56.147586 sudo[1727]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 5 14:58:56.147859 sudo[1727]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:58:56.165528 sudo[1727]: pam_unix(sudo:session): session closed for user root Nov 5 14:58:56.167259 sshd[1726]: Connection closed by 10.0.0.1 port 47714 Nov 5 14:58:56.167777 sshd-session[1722]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:56.177266 systemd[1]: sshd@4-10.0.0.22:22-10.0.0.1:47714.service: Deactivated successfully. Nov 5 14:58:56.179343 systemd[1]: session-5.scope: Deactivated successfully. Nov 5 14:58:56.180428 systemd-logind[1544]: Session 5 logged out. Waiting for processes to exit. Nov 5 14:58:56.182151 systemd-logind[1544]: Removed session 5. Nov 5 14:58:56.183736 systemd[1]: Started sshd@5-10.0.0.22:22-10.0.0.1:47724.service - OpenSSH per-connection server daemon (10.0.0.1:47724). Nov 5 14:58:56.242946 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 47724 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:58:56.245543 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:56.252445 systemd-logind[1544]: New session 6 of user core. Nov 5 14:58:56.257910 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 5 14:58:56.310226 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 5 14:58:56.310476 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:58:56.316909 sudo[1738]: pam_unix(sudo:session): session closed for user root Nov 5 14:58:56.322581 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 5 14:58:56.322843 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:58:56.331449 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 5 14:58:56.371476 augenrules[1760]: No rules Nov 5 14:58:56.372533 systemd[1]: audit-rules.service: Deactivated successfully. Nov 5 14:58:56.373775 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 5 14:58:56.374577 sudo[1737]: pam_unix(sudo:session): session closed for user root Nov 5 14:58:56.376047 sshd[1736]: Connection closed by 10.0.0.1 port 47724 Nov 5 14:58:56.376605 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Nov 5 14:58:56.389454 systemd[1]: sshd@5-10.0.0.22:22-10.0.0.1:47724.service: Deactivated successfully. Nov 5 14:58:56.391991 systemd[1]: session-6.scope: Deactivated successfully. Nov 5 14:58:56.393841 systemd-logind[1544]: Session 6 logged out. Waiting for processes to exit. Nov 5 14:58:56.396083 systemd[1]: Started sshd@6-10.0.0.22:22-10.0.0.1:47730.service - OpenSSH per-connection server daemon (10.0.0.1:47730). Nov 5 14:58:56.399234 systemd-logind[1544]: Removed session 6. Nov 5 14:58:56.472165 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 47730 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:58:56.472563 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:58:56.477645 systemd-logind[1544]: New session 7 of user core. Nov 5 14:58:56.489884 systemd[1]: Started session-7.scope - Session 7 of User core. 
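Sessions 2 through 7 above all follow one pattern: a publickey login as core from 10.0.0.1, a single sudo command (setenforce, audit-rules cleanup), then disconnect, which looks like an external provisioner driving the node over SSH. A hedged sketch of that connect, run one command, disconnect pattern with golang.org/x/crypto/ssh; the key path, target address and command here are illustrative assumptions, not values from this log:

// sshrun.go: illustrative only; mirrors the one-command-per-session pattern
// seen above. The key location, user, address and command are assumptions.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	keyBytes, err := os.ReadFile("/home/core/.ssh/id_rsa") // assumed key location
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "core",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local sketch, not for production
	}
	client, err := ssh.Dial("tcp", "10.0.0.22:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo systemctl is-active audit-rules") // assumed command
	fmt.Printf("%s", out)
	if err != nil {
		log.Fatal(err)
	}
}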
Nov 5 14:58:56.543184 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 5 14:58:56.543671 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 5 14:58:56.806956 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 5 14:58:56.824988 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 5 14:58:57.032768 dockerd[1794]: time="2025-11-05T14:58:57.032712898Z" level=info msg="Starting up" Nov 5 14:58:57.033522 dockerd[1794]: time="2025-11-05T14:58:57.033501475Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Nov 5 14:58:57.045648 dockerd[1794]: time="2025-11-05T14:58:57.045609792Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Nov 5 14:58:57.261248 dockerd[1794]: time="2025-11-05T14:58:57.261133145Z" level=info msg="Loading containers: start." Nov 5 14:58:57.271703 kernel: Initializing XFRM netlink socket Nov 5 14:58:57.464359 systemd-networkd[1476]: docker0: Link UP Nov 5 14:58:57.468003 dockerd[1794]: time="2025-11-05T14:58:57.467967134Z" level=info msg="Loading containers: done." Nov 5 14:58:57.479368 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2146275543-merged.mount: Deactivated successfully. Nov 5 14:58:57.482828 dockerd[1794]: time="2025-11-05T14:58:57.482785483Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 5 14:58:57.482937 dockerd[1794]: time="2025-11-05T14:58:57.482868539Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Nov 5 14:58:57.482963 dockerd[1794]: time="2025-11-05T14:58:57.482948663Z" level=info msg="Initializing buildkit" Nov 5 14:58:57.502914 dockerd[1794]: time="2025-11-05T14:58:57.502874045Z" level=info msg="Completed buildkit initialization" Nov 5 14:58:57.509639 dockerd[1794]: time="2025-11-05T14:58:57.509593644Z" level=info msg="Daemon has completed initialization" Nov 5 14:58:57.509798 dockerd[1794]: time="2025-11-05T14:58:57.509661183Z" level=info msg="API listen on /run/docker.sock" Nov 5 14:58:57.509943 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 5 14:58:58.089724 containerd[1563]: time="2025-11-05T14:58:58.089657193Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 5 14:58:58.608759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3922956951.mount: Deactivated successfully. 
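docker.service is now up with its API on /run/docker.sock (the client default of unix:///var/run/docker.sock resolves to the same socket, since /var/run is normally a symlink to /run). A hedged sketch with the Docker Go SDK, assuming the github.com/docker/docker/client module is available, that pings the daemon and prints the version reported above (28.0.4):

// dockerping.go: a sketch, not part of this boot flow; connects to the local
// Docker API socket, pings the daemon and prints its reported version.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ctx := context.Background()
	if _, err := cli.Ping(ctx); err != nil {
		log.Fatal("daemon not reachable: ", err)
	}
	v, err := cli.ServerVersion(ctx)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("docker", v.Version, "api", v.APIVersion)
}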
Nov 5 14:58:59.482426 containerd[1563]: time="2025-11-05T14:58:59.482372479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:58:59.484160 containerd[1563]: time="2025-11-05T14:58:59.484127041Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Nov 5 14:58:59.485610 containerd[1563]: time="2025-11-05T14:58:59.485568818Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:58:59.488750 containerd[1563]: time="2025-11-05T14:58:59.488705657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:58:59.490125 containerd[1563]: time="2025-11-05T14:58:59.490079328Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.400311339s" Nov 5 14:58:59.490180 containerd[1563]: time="2025-11-05T14:58:59.490127077Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 5 14:58:59.490782 containerd[1563]: time="2025-11-05T14:58:59.490743095Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 5 14:59:00.515499 containerd[1563]: time="2025-11-05T14:59:00.515446211Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:00.516065 containerd[1563]: time="2025-11-05T14:59:00.516044341Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Nov 5 14:59:00.516982 containerd[1563]: time="2025-11-05T14:59:00.516957883Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:00.519493 containerd[1563]: time="2025-11-05T14:59:00.519441586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:00.520696 containerd[1563]: time="2025-11-05T14:59:00.520457061Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.02968192s" Nov 5 14:59:00.520696 containerd[1563]: time="2025-11-05T14:59:00.520505271Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 5 14:59:00.521088 containerd[1563]: 
time="2025-11-05T14:59:00.520910799Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 5 14:59:01.564364 containerd[1563]: time="2025-11-05T14:59:01.564309412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:01.564926 containerd[1563]: time="2025-11-05T14:59:01.564897933Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Nov 5 14:59:01.565719 containerd[1563]: time="2025-11-05T14:59:01.565681206Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:01.568206 containerd[1563]: time="2025-11-05T14:59:01.568167729Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:01.569289 containerd[1563]: time="2025-11-05T14:59:01.569267856Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.048326476s" Nov 5 14:59:01.569334 containerd[1563]: time="2025-11-05T14:59:01.569297651Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 5 14:59:01.569698 containerd[1563]: time="2025-11-05T14:59:01.569667386Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 5 14:59:02.486413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3197980282.mount: Deactivated successfully. 
Nov 5 14:59:02.815933 containerd[1563]: time="2025-11-05T14:59:02.815815918Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:02.817038 containerd[1563]: time="2025-11-05T14:59:02.816512962Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Nov 5 14:59:02.817656 containerd[1563]: time="2025-11-05T14:59:02.817609147Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:02.820603 containerd[1563]: time="2025-11-05T14:59:02.820517971Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.250808532s" Nov 5 14:59:02.820603 containerd[1563]: time="2025-11-05T14:59:02.820554616Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 5 14:59:02.820937 containerd[1563]: time="2025-11-05T14:59:02.820884341Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:02.821277 containerd[1563]: time="2025-11-05T14:59:02.821090652Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 5 14:59:03.307697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4172291782.mount: Deactivated successfully. 
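As a companion to the pull sketch above, the ImageCreate events can be cross-checked by listing what is now stored in the k8s.io namespace; again a hedged illustration rather than anything this host runs:

// ctrimages.go: sketch; lists the images now present in the k8s.io namespace,
// matching the ImageCreate events logged by containerd above.
package main

import (
	"context"
	"fmt"
	"log"

	containerd "github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	imgs, err := client.ListImages(ctx)
	if err != nil {
		log.Fatal(err)
	}
	for _, img := range imgs {
		fmt.Println(img.Name(), img.Target().Digest)
	}
}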
Nov 5 14:59:03.940642 containerd[1563]: time="2025-11-05T14:59:03.940587297Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:03.941271 containerd[1563]: time="2025-11-05T14:59:03.941237829Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Nov 5 14:59:03.942539 containerd[1563]: time="2025-11-05T14:59:03.942488693Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:03.945153 containerd[1563]: time="2025-11-05T14:59:03.945113504Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:03.946821 containerd[1563]: time="2025-11-05T14:59:03.946781798Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.125659425s" Nov 5 14:59:03.946860 containerd[1563]: time="2025-11-05T14:59:03.946826930Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 5 14:59:03.947943 containerd[1563]: time="2025-11-05T14:59:03.947909461Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 5 14:59:04.105032 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 5 14:59:04.106390 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:59:04.270829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:04.275369 (kubelet)[2145]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 5 14:59:04.310967 kubelet[2145]: E1105 14:59:04.310896 2145 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 5 14:59:04.314043 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 5 14:59:04.314295 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 5 14:59:04.314662 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.3M memory peak. Nov 5 14:59:04.492779 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1460613921.mount: Deactivated successfully. 
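At this point kubelet.service is in the restart loop systemd scheduled for it ("restart counter is at 1"), and each attempt fails on the same missing config file. A hedged sketch that watches the unit's state from Go over the systemd D-Bus API, assuming the github.com/coreos/go-systemd/v22/dbus module; it is roughly the programmatic form of systemctl show kubelet -p ActiveState -p SubState -p NRestarts:

// unitstate.go: sketch only; queries systemd over D-Bus for kubelet.service
// state and its restart counter. Needs privileges to talk to the system bus.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/coreos/go-systemd/v22/dbus"
)

func main() {
	ctx := context.Background()
	conn, err := dbus.NewWithContext(ctx)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	for _, prop := range []string{"ActiveState", "SubState"} {
		p, err := conn.GetUnitPropertyContext(ctx, "kubelet.service", prop)
		if err != nil {
			log.Fatal(err)
		}
		fmt.Println(prop, "=", p.Value.String())
	}

	// NRestarts is a Service-scoped property, hence the separate call.
	n, err := conn.GetServicePropertyContext(ctx, "kubelet.service", "NRestarts")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("NRestarts =", n.Value.String())
}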
Nov 5 14:59:04.497345 containerd[1563]: time="2025-11-05T14:59:04.497287532Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 14:59:04.498444 containerd[1563]: time="2025-11-05T14:59:04.498401870Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 5 14:59:04.499365 containerd[1563]: time="2025-11-05T14:59:04.499316103Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 14:59:04.501210 containerd[1563]: time="2025-11-05T14:59:04.501169938Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 5 14:59:04.501820 containerd[1563]: time="2025-11-05T14:59:04.501788252Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 553.847903ms" Nov 5 14:59:04.501820 containerd[1563]: time="2025-11-05T14:59:04.501815842Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 5 14:59:04.502247 containerd[1563]: time="2025-11-05T14:59:04.502217878Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 5 14:59:04.973041 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1827407703.mount: Deactivated successfully. 
Nov 5 14:59:06.364030 containerd[1563]: time="2025-11-05T14:59:06.363922856Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:06.366544 containerd[1563]: time="2025-11-05T14:59:06.366492103Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Nov 5 14:59:06.367830 containerd[1563]: time="2025-11-05T14:59:06.367790868Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:06.370547 containerd[1563]: time="2025-11-05T14:59:06.370509281Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:06.371742 containerd[1563]: time="2025-11-05T14:59:06.371704324Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 1.869436691s" Nov 5 14:59:06.371807 containerd[1563]: time="2025-11-05T14:59:06.371743056Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 5 14:59:11.332523 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:11.332663 systemd[1]: kubelet.service: Consumed 145ms CPU time, 107.3M memory peak. Nov 5 14:59:11.334559 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:59:11.355895 systemd[1]: Reload requested from client PID 2242 ('systemctl') (unit session-7.scope)... Nov 5 14:59:11.355911 systemd[1]: Reloading... Nov 5 14:59:11.430727 zram_generator::config[2289]: No configuration found. Nov 5 14:59:11.640279 systemd[1]: Reloading finished in 284 ms. Nov 5 14:59:11.703253 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 5 14:59:11.703353 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 5 14:59:11.703678 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:11.703738 systemd[1]: kubelet.service: Consumed 88ms CPU time, 95.1M memory peak. Nov 5 14:59:11.705450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:59:11.829238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:11.834409 (kubelet)[2331]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 14:59:11.870321 kubelet[2331]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:59:11.870321 kubelet[2331]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 14:59:11.870321 kubelet[2331]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:59:11.870660 kubelet[2331]: I1105 14:59:11.870395 2331 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 14:59:12.430110 kubelet[2331]: I1105 14:59:12.430062 2331 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 14:59:12.430110 kubelet[2331]: I1105 14:59:12.430095 2331 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 14:59:12.430395 kubelet[2331]: I1105 14:59:12.430377 2331 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 14:59:12.450670 kubelet[2331]: E1105 14:59:12.450639 2331 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.22:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Nov 5 14:59:12.451945 kubelet[2331]: I1105 14:59:12.451921 2331 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 14:59:12.459733 kubelet[2331]: I1105 14:59:12.458811 2331 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 14:59:12.461859 kubelet[2331]: I1105 14:59:12.461827 2331 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 5 14:59:12.463495 kubelet[2331]: I1105 14:59:12.463433 2331 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 14:59:12.463644 kubelet[2331]: I1105 14:59:12.463487 2331 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 14:59:12.463750 kubelet[2331]: I1105 14:59:12.463729 2331 
topology_manager.go:138] "Creating topology manager with none policy" Nov 5 14:59:12.463750 kubelet[2331]: I1105 14:59:12.463741 2331 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 14:59:12.463957 kubelet[2331]: I1105 14:59:12.463920 2331 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:59:12.467043 kubelet[2331]: I1105 14:59:12.466997 2331 kubelet.go:446] "Attempting to sync node with API server" Nov 5 14:59:12.467043 kubelet[2331]: I1105 14:59:12.467020 2331 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 14:59:12.467043 kubelet[2331]: I1105 14:59:12.467049 2331 kubelet.go:352] "Adding apiserver pod source" Nov 5 14:59:12.467186 kubelet[2331]: I1105 14:59:12.467061 2331 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 14:59:12.477771 kubelet[2331]: W1105 14:59:12.477673 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 5 14:59:12.477888 kubelet[2331]: E1105 14:59:12.477782 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Nov 5 14:59:12.478950 kubelet[2331]: W1105 14:59:12.478908 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 5 14:59:12.479077 kubelet[2331]: E1105 14:59:12.479058 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.22:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Nov 5 14:59:12.479507 kubelet[2331]: I1105 14:59:12.479490 2331 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 14:59:12.480197 kubelet[2331]: I1105 14:59:12.480180 2331 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 14:59:12.480400 kubelet[2331]: W1105 14:59:12.480388 2331 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
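All of the "connection refused" errors against https://10.0.0.22:6443 above are the expected bootstrap ordering problem: this kubelet needs the API server for certificate signing and for its informers, but kube-apiserver is itself one of the static pods the kubelet is about to start from /etc/kubernetes/manifests, so the errors persist only until that pod is listening. A stdlib Go sketch of the same reachability check, for illustration:

// apiprobe.go: illustration; a plain TCP probe of the endpoint the kubelet is
// retrying above. "connection refused" here means kube-apiserver is not
// listening yet, not that the network is broken.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "10.0.0.22:6443" // the API server address used throughout the log
	conn, err := net.DialTimeout("tcp", addr, 2*time.Second)
	if err != nil {
		fmt.Println("not reachable yet:", err)
		return
	}
	conn.Close()
	fmt.Println("kube-apiserver is accepting connections on", addr)
}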
Nov 5 14:59:12.481459 kubelet[2331]: I1105 14:59:12.481433 2331 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 14:59:12.481581 kubelet[2331]: I1105 14:59:12.481569 2331 server.go:1287] "Started kubelet" Nov 5 14:59:12.481879 kubelet[2331]: I1105 14:59:12.481825 2331 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 14:59:12.482181 kubelet[2331]: I1105 14:59:12.482154 2331 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 14:59:12.482238 kubelet[2331]: I1105 14:59:12.482154 2331 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 14:59:12.483195 kubelet[2331]: I1105 14:59:12.483172 2331 server.go:479] "Adding debug handlers to kubelet server" Nov 5 14:59:12.483897 kubelet[2331]: I1105 14:59:12.483874 2331 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 14:59:12.484303 kubelet[2331]: I1105 14:59:12.484280 2331 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 14:59:12.485654 kubelet[2331]: E1105 14:59:12.485629 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:12.485654 kubelet[2331]: I1105 14:59:12.485657 2331 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 14:59:12.485922 kubelet[2331]: I1105 14:59:12.485896 2331 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 14:59:12.486034 kubelet[2331]: I1105 14:59:12.486019 2331 reconciler.go:26] "Reconciler: start to sync state" Nov 5 14:59:12.486436 kubelet[2331]: W1105 14:59:12.486391 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 5 14:59:12.486494 kubelet[2331]: E1105 14:59:12.486442 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.22:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Nov 5 14:59:12.486632 kubelet[2331]: E1105 14:59:12.486574 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="200ms" Nov 5 14:59:12.486844 kubelet[2331]: E1105 14:59:12.486416 2331 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.22:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.22:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1875244f5de22786 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-05 14:59:12.481535878 +0000 UTC m=+0.643214359,LastTimestamp:2025-11-05 14:59:12.481535878 +0000 UTC m=+0.643214359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 5 14:59:12.486916 kubelet[2331]: I1105 14:59:12.486882 2331 factory.go:221] Registration of the systemd container factory successfully Nov 5 14:59:12.486996 kubelet[2331]: I1105 14:59:12.486976 2331 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 14:59:12.487719 kubelet[2331]: E1105 14:59:12.487669 2331 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 5 14:59:12.487980 kubelet[2331]: I1105 14:59:12.487964 2331 factory.go:221] Registration of the containerd container factory successfully Nov 5 14:59:12.500210 kubelet[2331]: I1105 14:59:12.500180 2331 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 14:59:12.500340 kubelet[2331]: I1105 14:59:12.500327 2331 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 14:59:12.500450 kubelet[2331]: I1105 14:59:12.500384 2331 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:59:12.501614 kubelet[2331]: I1105 14:59:12.501582 2331 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 14:59:12.502745 kubelet[2331]: I1105 14:59:12.502726 2331 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 14:59:12.502855 kubelet[2331]: I1105 14:59:12.502842 2331 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 14:59:12.502938 kubelet[2331]: I1105 14:59:12.502927 2331 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
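The repeated "Failed to ensure lease exists" messages refer to the node heartbeat Lease in the kube-node-lease namespace; once the control plane answers, the same object can be inspected with client-go. A hedged sketch, where the kubeconfig path /etc/kubernetes/kubelet.conf is an assumption based on the usual kubeadm layout rather than something stated in this log:

// leasecheck.go: sketch only; reads the node heartbeat Lease the kubelet above
// keeps retrying to create. Fails with the same "connection refused" until
// kube-apiserver is serving.
package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	if lease.Spec.HolderIdentity != nil && lease.Spec.RenewTime != nil {
		fmt.Println("lease held by", *lease.Spec.HolderIdentity, "renewed at", lease.Spec.RenewTime.Time)
	}
}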
Nov 5 14:59:12.502983 kubelet[2331]: I1105 14:59:12.502976 2331 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 14:59:12.503075 kubelet[2331]: E1105 14:59:12.503059 2331 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 14:59:12.503823 kubelet[2331]: W1105 14:59:12.503796 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 5 14:59:12.504128 kubelet[2331]: E1105 14:59:12.504074 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.22:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Nov 5 14:59:12.586797 kubelet[2331]: E1105 14:59:12.586760 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:12.603205 kubelet[2331]: I1105 14:59:12.603181 2331 policy_none.go:49] "None policy: Start" Nov 5 14:59:12.603358 kubelet[2331]: I1105 14:59:12.603292 2331 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 14:59:12.603358 kubelet[2331]: I1105 14:59:12.603309 2331 state_mem.go:35] "Initializing new in-memory state store" Nov 5 14:59:12.603955 kubelet[2331]: E1105 14:59:12.603929 2331 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 5 14:59:12.610713 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 5 14:59:12.629733 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 5 14:59:12.633118 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 5 14:59:12.653923 kubelet[2331]: I1105 14:59:12.653851 2331 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 14:59:12.654086 kubelet[2331]: I1105 14:59:12.654066 2331 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 14:59:12.654132 kubelet[2331]: I1105 14:59:12.654086 2331 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 14:59:12.654448 kubelet[2331]: I1105 14:59:12.654412 2331 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 14:59:12.656182 kubelet[2331]: E1105 14:59:12.656125 2331 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 14:59:12.656182 kubelet[2331]: E1105 14:59:12.656165 2331 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 5 14:59:12.687486 kubelet[2331]: E1105 14:59:12.687202 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="400ms" Nov 5 14:59:12.755313 kubelet[2331]: I1105 14:59:12.755276 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:12.755793 kubelet[2331]: E1105 14:59:12.755765 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Nov 5 14:59:12.813922 systemd[1]: Created slice kubepods-burstable-poded918bc93c6b9af8ca206c714e2c8efd.slice - libcontainer container kubepods-burstable-poded918bc93c6b9af8ca206c714e2c8efd.slice. Nov 5 14:59:12.837229 kubelet[2331]: E1105 14:59:12.837194 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:12.840381 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 5 14:59:12.842219 kubelet[2331]: E1105 14:59:12.842056 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:12.844279 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Nov 5 14:59:12.845536 kubelet[2331]: E1105 14:59:12.845515 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:12.887536 kubelet[2331]: I1105 14:59:12.887472 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed918bc93c6b9af8ca206c714e2c8efd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed918bc93c6b9af8ca206c714e2c8efd\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:12.888171 kubelet[2331]: I1105 14:59:12.887516 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:12.888171 kubelet[2331]: I1105 14:59:12.887988 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:12.888171 kubelet[2331]: I1105 14:59:12.888011 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:12.888171 kubelet[2331]: I1105 14:59:12.888056 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:12.888171 kubelet[2331]: I1105 14:59:12.888074 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed918bc93c6b9af8ca206c714e2c8efd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed918bc93c6b9af8ca206c714e2c8efd\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:12.888359 kubelet[2331]: I1105 14:59:12.888088 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ed918bc93c6b9af8ca206c714e2c8efd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed918bc93c6b9af8ca206c714e2c8efd\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:12.888359 kubelet[2331]: I1105 14:59:12.888102 2331 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:12.888359 kubelet[2331]: I1105 14:59:12.888145 2331 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:12.957955 kubelet[2331]: I1105 14:59:12.957814 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:12.958454 kubelet[2331]: E1105 14:59:12.958425 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Nov 5 14:59:13.088206 kubelet[2331]: E1105 14:59:13.088168 2331 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.22:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.22:6443: connect: connection refused" interval="800ms" Nov 5 14:59:13.138455 kubelet[2331]: E1105 14:59:13.138423 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.139265 containerd[1563]: time="2025-11-05T14:59:13.139007106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed918bc93c6b9af8ca206c714e2c8efd,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:13.142809 kubelet[2331]: E1105 14:59:13.142764 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.143279 containerd[1563]: time="2025-11-05T14:59:13.143239803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:13.146209 kubelet[2331]: E1105 14:59:13.146115 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.146556 containerd[1563]: time="2025-11-05T14:59:13.146446358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:13.251914 containerd[1563]: time="2025-11-05T14:59:13.250212928Z" level=info msg="connecting to shim 5c992bdfed292009637fb1080f7b2d9103bd2270dea7b026ebccb3c1ea63a278" address="unix:///run/containerd/s/e2edab8ee0560e7eaca35e2db6accae09d635cffd93e786ec9ac5d788d2ecdb1" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:13.253557 containerd[1563]: time="2025-11-05T14:59:13.253506608Z" level=info msg="connecting to shim ba794f13c39c870af868b006c849c982ec5ecee2e0f2bc0617fb81b62bbddf17" address="unix:///run/containerd/s/0f6f562bd62826e426a3e6d5d5a25b8df72c80ea8a90b9c95ad3a8d92cec5bcc" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:13.265515 containerd[1563]: time="2025-11-05T14:59:13.265467295Z" level=info msg="connecting to shim f448f2baad9c38df7d89804abce6e5c24130a6a7e9102395f964b7edb9acefbf" address="unix:///run/containerd/s/5511b58cb77d0939eb118f05f4e6875bfdfe7324e74a00b6e7d0a09784828dd4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:13.279859 systemd[1]: Started cri-containerd-ba794f13c39c870af868b006c849c982ec5ecee2e0f2bc0617fb81b62bbddf17.scope - libcontainer container 
ba794f13c39c870af868b006c849c982ec5ecee2e0f2bc0617fb81b62bbddf17. Nov 5 14:59:13.283954 systemd[1]: Started cri-containerd-5c992bdfed292009637fb1080f7b2d9103bd2270dea7b026ebccb3c1ea63a278.scope - libcontainer container 5c992bdfed292009637fb1080f7b2d9103bd2270dea7b026ebccb3c1ea63a278. Nov 5 14:59:13.289146 systemd[1]: Started cri-containerd-f448f2baad9c38df7d89804abce6e5c24130a6a7e9102395f964b7edb9acefbf.scope - libcontainer container f448f2baad9c38df7d89804abce6e5c24130a6a7e9102395f964b7edb9acefbf. Nov 5 14:59:13.323018 containerd[1563]: time="2025-11-05T14:59:13.322890934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c992bdfed292009637fb1080f7b2d9103bd2270dea7b026ebccb3c1ea63a278\"" Nov 5 14:59:13.323560 containerd[1563]: time="2025-11-05T14:59:13.323513967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ed918bc93c6b9af8ca206c714e2c8efd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba794f13c39c870af868b006c849c982ec5ecee2e0f2bc0617fb81b62bbddf17\"" Nov 5 14:59:13.324096 kubelet[2331]: E1105 14:59:13.324070 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.324630 kubelet[2331]: E1105 14:59:13.324603 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.327474 containerd[1563]: time="2025-11-05T14:59:13.327009979Z" level=info msg="CreateContainer within sandbox \"ba794f13c39c870af868b006c849c982ec5ecee2e0f2bc0617fb81b62bbddf17\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 5 14:59:13.327705 containerd[1563]: time="2025-11-05T14:59:13.327658462Z" level=info msg="CreateContainer within sandbox \"5c992bdfed292009637fb1080f7b2d9103bd2270dea7b026ebccb3c1ea63a278\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 5 14:59:13.338016 containerd[1563]: time="2025-11-05T14:59:13.337974125Z" level=info msg="Container 0ef7d934fce932ef124d8a8873d4c6164eacbf297a7c445b30ef40994ffff885: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:13.338636 containerd[1563]: time="2025-11-05T14:59:13.338613274Z" level=info msg="Container b3dd760f81b05069cc8c91adae97e94aacae4d4aaa9d3b6ac0cad918e01a01b9: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:13.339202 containerd[1563]: time="2025-11-05T14:59:13.339166575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"f448f2baad9c38df7d89804abce6e5c24130a6a7e9102395f964b7edb9acefbf\"" Nov 5 14:59:13.340158 kubelet[2331]: E1105 14:59:13.340134 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.342471 containerd[1563]: time="2025-11-05T14:59:13.342437716Z" level=info msg="CreateContainer within sandbox \"f448f2baad9c38df7d89804abce6e5c24130a6a7e9102395f964b7edb9acefbf\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 5 14:59:13.348434 containerd[1563]: time="2025-11-05T14:59:13.348316315Z" level=info msg="CreateContainer within sandbox 
\"5c992bdfed292009637fb1080f7b2d9103bd2270dea7b026ebccb3c1ea63a278\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b3dd760f81b05069cc8c91adae97e94aacae4d4aaa9d3b6ac0cad918e01a01b9\"" Nov 5 14:59:13.348948 containerd[1563]: time="2025-11-05T14:59:13.348921237Z" level=info msg="StartContainer for \"b3dd760f81b05069cc8c91adae97e94aacae4d4aaa9d3b6ac0cad918e01a01b9\"" Nov 5 14:59:13.350001 containerd[1563]: time="2025-11-05T14:59:13.349975900Z" level=info msg="connecting to shim b3dd760f81b05069cc8c91adae97e94aacae4d4aaa9d3b6ac0cad918e01a01b9" address="unix:///run/containerd/s/e2edab8ee0560e7eaca35e2db6accae09d635cffd93e786ec9ac5d788d2ecdb1" protocol=ttrpc version=3 Nov 5 14:59:13.359072 containerd[1563]: time="2025-11-05T14:59:13.359034527Z" level=info msg="Container 4070a3eca3d14e1099d1c1971900d23ce6f23a5e101da220936aedc877fb485d: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:13.359163 containerd[1563]: time="2025-11-05T14:59:13.359131345Z" level=info msg="CreateContainer within sandbox \"ba794f13c39c870af868b006c849c982ec5ecee2e0f2bc0617fb81b62bbddf17\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"0ef7d934fce932ef124d8a8873d4c6164eacbf297a7c445b30ef40994ffff885\"" Nov 5 14:59:13.359819 containerd[1563]: time="2025-11-05T14:59:13.359793072Z" level=info msg="StartContainer for \"0ef7d934fce932ef124d8a8873d4c6164eacbf297a7c445b30ef40994ffff885\"" Nov 5 14:59:13.360623 kubelet[2331]: I1105 14:59:13.360351 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:13.360863 kubelet[2331]: E1105 14:59:13.360752 2331 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.22:6443/api/v1/nodes\": dial tcp 10.0.0.22:6443: connect: connection refused" node="localhost" Nov 5 14:59:13.361921 containerd[1563]: time="2025-11-05T14:59:13.361876829Z" level=info msg="connecting to shim 0ef7d934fce932ef124d8a8873d4c6164eacbf297a7c445b30ef40994ffff885" address="unix:///run/containerd/s/0f6f562bd62826e426a3e6d5d5a25b8df72c80ea8a90b9c95ad3a8d92cec5bcc" protocol=ttrpc version=3 Nov 5 14:59:13.368206 containerd[1563]: time="2025-11-05T14:59:13.368162406Z" level=info msg="CreateContainer within sandbox \"f448f2baad9c38df7d89804abce6e5c24130a6a7e9102395f964b7edb9acefbf\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4070a3eca3d14e1099d1c1971900d23ce6f23a5e101da220936aedc877fb485d\"" Nov 5 14:59:13.368918 containerd[1563]: time="2025-11-05T14:59:13.368818469Z" level=info msg="StartContainer for \"4070a3eca3d14e1099d1c1971900d23ce6f23a5e101da220936aedc877fb485d\"" Nov 5 14:59:13.370050 containerd[1563]: time="2025-11-05T14:59:13.370023206Z" level=info msg="connecting to shim 4070a3eca3d14e1099d1c1971900d23ce6f23a5e101da220936aedc877fb485d" address="unix:///run/containerd/s/5511b58cb77d0939eb118f05f4e6875bfdfe7324e74a00b6e7d0a09784828dd4" protocol=ttrpc version=3 Nov 5 14:59:13.376313 kubelet[2331]: W1105 14:59:13.376219 2331 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.22:6443: connect: connection refused Nov 5 14:59:13.376521 kubelet[2331]: E1105 14:59:13.376475 2331 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.22:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.22:6443: connect: connection refused" logger="UnhandledError" Nov 5 14:59:13.377842 systemd[1]: Started cri-containerd-b3dd760f81b05069cc8c91adae97e94aacae4d4aaa9d3b6ac0cad918e01a01b9.scope - libcontainer container b3dd760f81b05069cc8c91adae97e94aacae4d4aaa9d3b6ac0cad918e01a01b9. Nov 5 14:59:13.393864 systemd[1]: Started cri-containerd-0ef7d934fce932ef124d8a8873d4c6164eacbf297a7c445b30ef40994ffff885.scope - libcontainer container 0ef7d934fce932ef124d8a8873d4c6164eacbf297a7c445b30ef40994ffff885. Nov 5 14:59:13.395076 systemd[1]: Started cri-containerd-4070a3eca3d14e1099d1c1971900d23ce6f23a5e101da220936aedc877fb485d.scope - libcontainer container 4070a3eca3d14e1099d1c1971900d23ce6f23a5e101da220936aedc877fb485d. Nov 5 14:59:13.436741 containerd[1563]: time="2025-11-05T14:59:13.436701781Z" level=info msg="StartContainer for \"b3dd760f81b05069cc8c91adae97e94aacae4d4aaa9d3b6ac0cad918e01a01b9\" returns successfully" Nov 5 14:59:13.444866 containerd[1563]: time="2025-11-05T14:59:13.444734785Z" level=info msg="StartContainer for \"4070a3eca3d14e1099d1c1971900d23ce6f23a5e101da220936aedc877fb485d\" returns successfully" Nov 5 14:59:13.449102 containerd[1563]: time="2025-11-05T14:59:13.448989023Z" level=info msg="StartContainer for \"0ef7d934fce932ef124d8a8873d4c6164eacbf297a7c445b30ef40994ffff885\" returns successfully" Nov 5 14:59:13.513771 kubelet[2331]: E1105 14:59:13.512875 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:13.513771 kubelet[2331]: E1105 14:59:13.513012 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.516644 kubelet[2331]: E1105 14:59:13.516623 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:13.516991 kubelet[2331]: E1105 14:59:13.516977 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:13.517047 kubelet[2331]: E1105 14:59:13.517021 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:13.517350 kubelet[2331]: E1105 14:59:13.517334 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:14.165918 kubelet[2331]: I1105 14:59:14.165883 2331 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:14.518614 kubelet[2331]: E1105 14:59:14.518356 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:14.518614 kubelet[2331]: E1105 14:59:14.518429 2331 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 5 14:59:14.518614 kubelet[2331]: E1105 14:59:14.518478 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:14.518614 kubelet[2331]: E1105 14:59:14.518607 2331 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:14.779325 kubelet[2331]: E1105 14:59:14.779228 2331 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 5 14:59:14.846899 kubelet[2331]: I1105 14:59:14.846864 2331 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 14:59:14.846899 kubelet[2331]: E1105 14:59:14.846899 2331 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 5 14:59:14.871744 kubelet[2331]: E1105 14:59:14.871681 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:14.972295 kubelet[2331]: E1105 14:59:14.972259 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:15.073264 kubelet[2331]: E1105 14:59:15.073142 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:15.174001 kubelet[2331]: E1105 14:59:15.173949 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:15.274494 kubelet[2331]: E1105 14:59:15.274442 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:15.374670 kubelet[2331]: E1105 14:59:15.374625 2331 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:15.478509 kubelet[2331]: I1105 14:59:15.478485 2331 apiserver.go:52] "Watching apiserver" Nov 5 14:59:15.486744 kubelet[2331]: I1105 14:59:15.486720 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:15.486822 kubelet[2331]: I1105 14:59:15.486724 2331 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 14:59:15.492806 kubelet[2331]: E1105 14:59:15.492781 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:15.492806 kubelet[2331]: I1105 14:59:15.492806 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:15.494330 kubelet[2331]: E1105 14:59:15.494309 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:15.494330 kubelet[2331]: I1105 14:59:15.494330 2331 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:15.495718 kubelet[2331]: E1105 14:59:15.495699 2331 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:16.777834 systemd[1]: Reload requested from client PID 2605 ('systemctl') (unit session-7.scope)... 
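
The "Failed creating a mirror pod ... no PriorityClass with name system-node-critical was found" errors above are transient: the kubelet creates mirror pods for its static pods before the freshly started API server has finished bootstrapping its built-in priority classes, and later attempts go through once the class exists. A minimal client-go sketch for confirming the class is present; the kubeconfig path is an assumption used purely for illustration.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: admin kubeconfig path, shown only for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// The kubelet's mirror pods for static pods reference this built-in class.
	pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(),
		"system-node-critical", metav1.GetOptions{})
	if err != nil {
		log.Fatalf("priority class not available yet: %v", err)
	}
	fmt.Printf("%s exists with value %d\n", pc.Name, pc.Value)
}
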
Nov 5 14:59:16.777853 systemd[1]: Reloading... Nov 5 14:59:16.855719 zram_generator::config[2648]: No configuration found. Nov 5 14:59:17.112441 systemd[1]: Reloading finished in 334 ms. Nov 5 14:59:17.136244 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:59:17.157561 systemd[1]: kubelet.service: Deactivated successfully. Nov 5 14:59:17.157810 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:17.157871 systemd[1]: kubelet.service: Consumed 1.024s CPU time, 129.2M memory peak. Nov 5 14:59:17.159618 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 5 14:59:17.309797 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 5 14:59:17.313332 (kubelet)[2692]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 5 14:59:17.357717 kubelet[2692]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:59:17.357717 kubelet[2692]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 5 14:59:17.357717 kubelet[2692]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 5 14:59:17.357717 kubelet[2692]: I1105 14:59:17.357005 2692 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 5 14:59:17.363443 kubelet[2692]: I1105 14:59:17.363355 2692 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 5 14:59:17.363443 kubelet[2692]: I1105 14:59:17.363382 2692 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 5 14:59:17.363639 kubelet[2692]: I1105 14:59:17.363623 2692 server.go:954] "Client rotation is on, will bootstrap in background" Nov 5 14:59:17.365162 kubelet[2692]: I1105 14:59:17.365138 2692 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 5 14:59:17.367303 kubelet[2692]: I1105 14:59:17.367265 2692 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 5 14:59:17.371978 kubelet[2692]: I1105 14:59:17.371954 2692 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Nov 5 14:59:17.374919 kubelet[2692]: I1105 14:59:17.374578 2692 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 5 14:59:17.374919 kubelet[2692]: I1105 14:59:17.374781 2692 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 5 14:59:17.375172 kubelet[2692]: I1105 14:59:17.374804 2692 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 5 14:59:17.375270 kubelet[2692]: I1105 14:59:17.375182 2692 topology_manager.go:138] "Creating topology manager with none policy" Nov 5 14:59:17.375270 kubelet[2692]: I1105 14:59:17.375194 2692 container_manager_linux.go:304] "Creating device plugin manager" Nov 5 14:59:17.375270 kubelet[2692]: I1105 14:59:17.375245 2692 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:59:17.375513 kubelet[2692]: I1105 14:59:17.375466 2692 kubelet.go:446] "Attempting to sync node with API server" Nov 5 14:59:17.375513 kubelet[2692]: I1105 14:59:17.375501 2692 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 5 14:59:17.375574 kubelet[2692]: I1105 14:59:17.375531 2692 kubelet.go:352] "Adding apiserver pod source" Nov 5 14:59:17.375574 kubelet[2692]: I1105 14:59:17.375544 2692 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 5 14:59:17.376739 kubelet[2692]: I1105 14:59:17.376719 2692 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Nov 5 14:59:17.377454 kubelet[2692]: I1105 14:59:17.377426 2692 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 5 14:59:17.378098 kubelet[2692]: I1105 14:59:17.378084 2692 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 5 14:59:17.378217 kubelet[2692]: I1105 14:59:17.378206 2692 server.go:1287] "Started kubelet" Nov 5 14:59:17.378829 kubelet[2692]: I1105 14:59:17.378773 2692 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 5 14:59:17.379407 kubelet[2692]: I1105 14:59:17.378812 2692 ratelimit.go:55] "Setting rate limiting for 
endpoint" service="podresources" qps=100 burstTokens=10 Nov 5 14:59:17.380885 kubelet[2692]: I1105 14:59:17.380853 2692 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 5 14:59:17.381455 kubelet[2692]: I1105 14:59:17.381438 2692 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 5 14:59:17.386832 kubelet[2692]: I1105 14:59:17.386805 2692 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 5 14:59:17.387826 kubelet[2692]: I1105 14:59:17.387801 2692 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 5 14:59:17.387897 kubelet[2692]: E1105 14:59:17.387882 2692 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 5 14:59:17.388438 kubelet[2692]: I1105 14:59:17.388407 2692 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 5 14:59:17.388563 kubelet[2692]: I1105 14:59:17.388549 2692 reconciler.go:26] "Reconciler: start to sync state" Nov 5 14:59:17.389024 kubelet[2692]: I1105 14:59:17.388998 2692 server.go:479] "Adding debug handlers to kubelet server" Nov 5 14:59:17.394019 kubelet[2692]: I1105 14:59:17.393841 2692 factory.go:221] Registration of the systemd container factory successfully Nov 5 14:59:17.394019 kubelet[2692]: I1105 14:59:17.393939 2692 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 5 14:59:17.395539 kubelet[2692]: I1105 14:59:17.395519 2692 factory.go:221] Registration of the containerd container factory successfully Nov 5 14:59:17.411084 kubelet[2692]: I1105 14:59:17.410849 2692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 5 14:59:17.412619 kubelet[2692]: I1105 14:59:17.412582 2692 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 5 14:59:17.413056 kubelet[2692]: I1105 14:59:17.412876 2692 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 5 14:59:17.413056 kubelet[2692]: I1105 14:59:17.412921 2692 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 5 14:59:17.413056 kubelet[2692]: I1105 14:59:17.412930 2692 kubelet.go:2382] "Starting kubelet main sync loop" Nov 5 14:59:17.413227 kubelet[2692]: E1105 14:59:17.412974 2692 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 5 14:59:17.432319 kubelet[2692]: I1105 14:59:17.432285 2692 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 5 14:59:17.432319 kubelet[2692]: I1105 14:59:17.432302 2692 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 5 14:59:17.432319 kubelet[2692]: I1105 14:59:17.432320 2692 state_mem.go:36] "Initialized new in-memory state store" Nov 5 14:59:17.432465 kubelet[2692]: I1105 14:59:17.432447 2692 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 5 14:59:17.432496 kubelet[2692]: I1105 14:59:17.432463 2692 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 5 14:59:17.432496 kubelet[2692]: I1105 14:59:17.432481 2692 policy_none.go:49] "None policy: Start" Nov 5 14:59:17.432496 kubelet[2692]: I1105 14:59:17.432491 2692 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 5 14:59:17.432553 kubelet[2692]: I1105 14:59:17.432500 2692 state_mem.go:35] "Initializing new in-memory state store" Nov 5 14:59:17.432599 kubelet[2692]: I1105 14:59:17.432589 2692 state_mem.go:75] "Updated machine memory state" Nov 5 14:59:17.435972 kubelet[2692]: I1105 14:59:17.435950 2692 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 5 14:59:17.436338 kubelet[2692]: I1105 14:59:17.436091 2692 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 5 14:59:17.436338 kubelet[2692]: I1105 14:59:17.436103 2692 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 5 14:59:17.436338 kubelet[2692]: I1105 14:59:17.436318 2692 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 5 14:59:17.437044 kubelet[2692]: E1105 14:59:17.437022 2692 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Nov 5 14:59:17.514292 kubelet[2692]: I1105 14:59:17.514178 2692 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:17.514292 kubelet[2692]: I1105 14:59:17.514290 2692 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:17.514447 kubelet[2692]: I1105 14:59:17.514404 2692 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:17.540820 kubelet[2692]: I1105 14:59:17.540787 2692 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 5 14:59:17.546789 kubelet[2692]: I1105 14:59:17.546766 2692 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 5 14:59:17.546872 kubelet[2692]: I1105 14:59:17.546831 2692 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 5 14:59:17.689504 kubelet[2692]: I1105 14:59:17.689278 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ed918bc93c6b9af8ca206c714e2c8efd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed918bc93c6b9af8ca206c714e2c8efd\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:17.689504 kubelet[2692]: I1105 14:59:17.689327 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:17.689504 kubelet[2692]: I1105 14:59:17.689347 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 5 14:59:17.689504 kubelet[2692]: I1105 14:59:17.689362 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:17.689504 kubelet[2692]: I1105 14:59:17.689381 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:17.689703 kubelet[2692]: I1105 14:59:17.689395 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ed918bc93c6b9af8ca206c714e2c8efd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ed918bc93c6b9af8ca206c714e2c8efd\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:17.689703 kubelet[2692]: I1105 14:59:17.689422 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ed918bc93c6b9af8ca206c714e2c8efd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ed918bc93c6b9af8ca206c714e2c8efd\") " pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:17.689703 kubelet[2692]: I1105 14:59:17.689437 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:17.689703 kubelet[2692]: I1105 14:59:17.689478 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:17.777017 sudo[2732]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 5 14:59:17.777267 sudo[2732]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 5 14:59:17.818940 kubelet[2692]: E1105 14:59:17.818904 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:17.819893 kubelet[2692]: E1105 14:59:17.819867 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:17.820001 kubelet[2692]: E1105 14:59:17.819977 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:18.092818 sudo[2732]: pam_unix(sudo:session): session closed for user root Nov 5 14:59:18.376841 kubelet[2692]: I1105 14:59:18.376809 2692 apiserver.go:52] "Watching apiserver" Nov 5 14:59:18.389125 kubelet[2692]: I1105 14:59:18.389101 2692 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 5 14:59:18.424847 kubelet[2692]: I1105 14:59:18.424817 2692 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:18.425148 kubelet[2692]: I1105 14:59:18.425130 2692 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 5 14:59:18.425932 kubelet[2692]: E1105 14:59:18.425910 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:18.430811 kubelet[2692]: E1105 14:59:18.430781 2692 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Nov 5 14:59:18.431234 kubelet[2692]: E1105 14:59:18.431204 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:18.431804 kubelet[2692]: E1105 14:59:18.431608 2692 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" 
Nov 5 14:59:18.431804 kubelet[2692]: E1105 14:59:18.431748 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:18.473437 kubelet[2692]: I1105 14:59:18.473382 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.473366119 podStartE2EDuration="1.473366119s" podCreationTimestamp="2025-11-05 14:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:18.462018182 +0000 UTC m=+1.144362073" watchObservedRunningTime="2025-11-05 14:59:18.473366119 +0000 UTC m=+1.155710010" Nov 5 14:59:18.487704 kubelet[2692]: I1105 14:59:18.487640 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.487622222 podStartE2EDuration="1.487622222s" podCreationTimestamp="2025-11-05 14:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:18.474599568 +0000 UTC m=+1.156943459" watchObservedRunningTime="2025-11-05 14:59:18.487622222 +0000 UTC m=+1.169966113" Nov 5 14:59:19.427708 kubelet[2692]: E1105 14:59:19.427553 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:19.427708 kubelet[2692]: E1105 14:59:19.427619 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:19.428267 kubelet[2692]: E1105 14:59:19.428251 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:19.481409 sudo[1773]: pam_unix(sudo:session): session closed for user root Nov 5 14:59:19.482826 sshd[1772]: Connection closed by 10.0.0.1 port 47730 Nov 5 14:59:19.483262 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:19.487569 systemd[1]: sshd@6-10.0.0.22:22-10.0.0.1:47730.service: Deactivated successfully. Nov 5 14:59:19.489568 systemd[1]: session-7.scope: Deactivated successfully. Nov 5 14:59:19.489793 systemd[1]: session-7.scope: Consumed 6.535s CPU time, 252.6M memory peak. Nov 5 14:59:19.491111 systemd-logind[1544]: Session 7 logged out. Waiting for processes to exit. Nov 5 14:59:19.493403 systemd-logind[1544]: Removed session 7. Nov 5 14:59:21.001304 kubelet[2692]: E1105 14:59:21.001274 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:22.159615 kubelet[2692]: I1105 14:59:22.159568 2692 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 5 14:59:22.165591 containerd[1563]: time="2025-11-05T14:59:22.165552745Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
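
The 192.168.0.0/24 pod CIDR handed to the runtime at 14:59:22 comes from the Node object's spec, typically assigned by the controller manager's node IPAM once the node has registered; the kubelet relays it to containerd, which notes that no CNI config is present yet because Cilium will install its own. A hedged client-go sketch reading that field; the kubeconfig path is an assumption and the node name is taken from this log.

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: admin kubeconfig path, for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	// Spec.PodCIDR is what the kubelet forwards to the container runtime.
	fmt.Printf("node %s pod CIDR: %s (all: %v)\n", node.Name, node.Spec.PodCIDR, node.Spec.PodCIDRs)
}
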
Nov 5 14:59:22.165983 kubelet[2692]: I1105 14:59:22.165781 2692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 5 14:59:23.246824 kubelet[2692]: I1105 14:59:23.246348 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=6.246310169 podStartE2EDuration="6.246310169s" podCreationTimestamp="2025-11-05 14:59:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:18.48795484 +0000 UTC m=+1.170298731" watchObservedRunningTime="2025-11-05 14:59:23.246310169 +0000 UTC m=+5.928654060" Nov 5 14:59:23.260852 systemd[1]: Created slice kubepods-besteffort-pod5227784f_b459_4103_84a0_6a5c2cccf8ba.slice - libcontainer container kubepods-besteffort-pod5227784f_b459_4103_84a0_6a5c2cccf8ba.slice. Nov 5 14:59:23.296375 systemd[1]: Created slice kubepods-burstable-poddb52dd67_f4bc_4c78_bcc3_795a86775434.slice - libcontainer container kubepods-burstable-poddb52dd67_f4bc_4c78_bcc3_795a86775434.slice. Nov 5 14:59:23.302871 systemd[1]: Created slice kubepods-besteffort-pod0ae31ee7_9542_4fac_9bc4_569bb9f3010f.slice - libcontainer container kubepods-besteffort-pod0ae31ee7_9542_4fac_9bc4_569bb9f3010f.slice. Nov 5 14:59:23.425518 kubelet[2692]: I1105 14:59:23.425481 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5227784f-b459-4103-84a0-6a5c2cccf8ba-xtables-lock\") pod \"kube-proxy-2kb7j\" (UID: \"5227784f-b459-4103-84a0-6a5c2cccf8ba\") " pod="kube-system/kube-proxy-2kb7j" Nov 5 14:59:23.425518 kubelet[2692]: I1105 14:59:23.425520 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-etc-cni-netd\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.425673 kubelet[2692]: I1105 14:59:23.425543 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mczvv\" (UniqueName: \"kubernetes.io/projected/5227784f-b459-4103-84a0-6a5c2cccf8ba-kube-api-access-mczvv\") pod \"kube-proxy-2kb7j\" (UID: \"5227784f-b459-4103-84a0-6a5c2cccf8ba\") " pod="kube-system/kube-proxy-2kb7j" Nov 5 14:59:23.425673 kubelet[2692]: I1105 14:59:23.425560 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-bpf-maps\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.425673 kubelet[2692]: I1105 14:59:23.425576 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cni-path\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.425673 kubelet[2692]: I1105 14:59:23.425590 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-lib-modules\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " 
pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.425673 kubelet[2692]: I1105 14:59:23.425606 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-xtables-lock\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.425673 kubelet[2692]: I1105 14:59:23.425619 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-config-path\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426049 kubelet[2692]: I1105 14:59:23.425633 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlnt8\" (UniqueName: \"kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-kube-api-access-hlnt8\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426049 kubelet[2692]: I1105 14:59:23.425647 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5227784f-b459-4103-84a0-6a5c2cccf8ba-lib-modules\") pod \"kube-proxy-2kb7j\" (UID: \"5227784f-b459-4103-84a0-6a5c2cccf8ba\") " pod="kube-system/kube-proxy-2kb7j" Nov 5 14:59:23.426049 kubelet[2692]: I1105 14:59:23.425662 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-cb9tm\" (UID: \"0ae31ee7-9542-4fac-9bc4-569bb9f3010f\") " pod="kube-system/cilium-operator-6c4d7847fc-cb9tm" Nov 5 14:59:23.426049 kubelet[2692]: I1105 14:59:23.425679 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-net\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426049 kubelet[2692]: I1105 14:59:23.425722 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-run\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426152 kubelet[2692]: I1105 14:59:23.425738 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-kernel\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426152 kubelet[2692]: I1105 14:59:23.425752 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-hubble-tls\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426152 kubelet[2692]: I1105 14:59:23.425767 2692 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5227784f-b459-4103-84a0-6a5c2cccf8ba-kube-proxy\") pod \"kube-proxy-2kb7j\" (UID: \"5227784f-b459-4103-84a0-6a5c2cccf8ba\") " pod="kube-system/kube-proxy-2kb7j" Nov 5 14:59:23.426152 kubelet[2692]: I1105 14:59:23.425783 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-hostproc\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426152 kubelet[2692]: I1105 14:59:23.425801 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-cgroup\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426152 kubelet[2692]: I1105 14:59:23.425816 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db52dd67-f4bc-4c78-bcc3-795a86775434-clustermesh-secrets\") pod \"cilium-qvpr4\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " pod="kube-system/cilium-qvpr4" Nov 5 14:59:23.426270 kubelet[2692]: I1105 14:59:23.425833 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpbd9\" (UniqueName: \"kubernetes.io/projected/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-kube-api-access-mpbd9\") pod \"cilium-operator-6c4d7847fc-cb9tm\" (UID: \"0ae31ee7-9542-4fac-9bc4-569bb9f3010f\") " pod="kube-system/cilium-operator-6c4d7847fc-cb9tm" Nov 5 14:59:23.594058 kubelet[2692]: E1105 14:59:23.593391 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:23.594682 containerd[1563]: time="2025-11-05T14:59:23.594599472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kb7j,Uid:5227784f-b459-4103-84a0-6a5c2cccf8ba,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:23.601557 kubelet[2692]: E1105 14:59:23.601307 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:23.603160 containerd[1563]: time="2025-11-05T14:59:23.603104185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvpr4,Uid:db52dd67-f4bc-4c78-bcc3-795a86775434,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:23.607445 kubelet[2692]: E1105 14:59:23.607420 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:23.607893 containerd[1563]: time="2025-11-05T14:59:23.607850130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cb9tm,Uid:0ae31ee7-9542-4fac-9bc4-569bb9f3010f,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:23.617378 containerd[1563]: time="2025-11-05T14:59:23.617333826Z" level=info msg="connecting to shim 2916eeebed2ddb3b79fbd7227f6b567c79d86583ee4bece8777c925d62a26382" address="unix:///run/containerd/s/e2033d99e26382572a5e3408c8469bc3732ee49a6b8b2df890a80d3ebb48fdb1" 
namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:23.641344 containerd[1563]: time="2025-11-05T14:59:23.641146854Z" level=info msg="connecting to shim bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025" address="unix:///run/containerd/s/bf6c6fcb7a2852b6bf4a5c0f5a4780a0b5faedfe62612e03d280c4caf1f8537c" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:23.643484 containerd[1563]: time="2025-11-05T14:59:23.643436066Z" level=info msg="connecting to shim 1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04" address="unix:///run/containerd/s/0deb092baa25dfc49203bdcb3d2a99491e1e96d96c6e8063c53ca323782ee7a4" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:23.645901 systemd[1]: Started cri-containerd-2916eeebed2ddb3b79fbd7227f6b567c79d86583ee4bece8777c925d62a26382.scope - libcontainer container 2916eeebed2ddb3b79fbd7227f6b567c79d86583ee4bece8777c925d62a26382. Nov 5 14:59:23.673845 systemd[1]: Started cri-containerd-1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04.scope - libcontainer container 1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04. Nov 5 14:59:23.677671 systemd[1]: Started cri-containerd-bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025.scope - libcontainer container bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025. Nov 5 14:59:23.684373 containerd[1563]: time="2025-11-05T14:59:23.684325031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2kb7j,Uid:5227784f-b459-4103-84a0-6a5c2cccf8ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"2916eeebed2ddb3b79fbd7227f6b567c79d86583ee4bece8777c925d62a26382\"" Nov 5 14:59:23.685236 kubelet[2692]: E1105 14:59:23.685212 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:23.701570 containerd[1563]: time="2025-11-05T14:59:23.701528998Z" level=info msg="CreateContainer within sandbox \"2916eeebed2ddb3b79fbd7227f6b567c79d86583ee4bece8777c925d62a26382\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 5 14:59:23.715414 containerd[1563]: time="2025-11-05T14:59:23.715366039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qvpr4,Uid:db52dd67-f4bc-4c78-bcc3-795a86775434,Namespace:kube-system,Attempt:0,} returns sandbox id \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\"" Nov 5 14:59:23.716698 kubelet[2692]: E1105 14:59:23.716622 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:23.718044 containerd[1563]: time="2025-11-05T14:59:23.718017673Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 5 14:59:23.726231 containerd[1563]: time="2025-11-05T14:59:23.726194979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-cb9tm,Uid:0ae31ee7-9542-4fac-9bc4-569bb9f3010f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\"" Nov 5 14:59:23.726898 kubelet[2692]: E1105 14:59:23.726852 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:23.727758 containerd[1563]: 
time="2025-11-05T14:59:23.727727649Z" level=info msg="Container 64779ae50ed8f2aa3847986676558cbe7974e1d9e9b23e7f295445a6b3afa543: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:23.736596 containerd[1563]: time="2025-11-05T14:59:23.736535706Z" level=info msg="CreateContainer within sandbox \"2916eeebed2ddb3b79fbd7227f6b567c79d86583ee4bece8777c925d62a26382\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"64779ae50ed8f2aa3847986676558cbe7974e1d9e9b23e7f295445a6b3afa543\"" Nov 5 14:59:23.741292 containerd[1563]: time="2025-11-05T14:59:23.741254311Z" level=info msg="StartContainer for \"64779ae50ed8f2aa3847986676558cbe7974e1d9e9b23e7f295445a6b3afa543\"" Nov 5 14:59:23.749618 containerd[1563]: time="2025-11-05T14:59:23.749489695Z" level=info msg="connecting to shim 64779ae50ed8f2aa3847986676558cbe7974e1d9e9b23e7f295445a6b3afa543" address="unix:///run/containerd/s/e2033d99e26382572a5e3408c8469bc3732ee49a6b8b2df890a80d3ebb48fdb1" protocol=ttrpc version=3 Nov 5 14:59:23.773863 systemd[1]: Started cri-containerd-64779ae50ed8f2aa3847986676558cbe7974e1d9e9b23e7f295445a6b3afa543.scope - libcontainer container 64779ae50ed8f2aa3847986676558cbe7974e1d9e9b23e7f295445a6b3afa543. Nov 5 14:59:23.810718 containerd[1563]: time="2025-11-05T14:59:23.810627861Z" level=info msg="StartContainer for \"64779ae50ed8f2aa3847986676558cbe7974e1d9e9b23e7f295445a6b3afa543\" returns successfully" Nov 5 14:59:24.437195 kubelet[2692]: E1105 14:59:24.437155 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:24.449595 kubelet[2692]: I1105 14:59:24.449535 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2kb7j" podStartSLOduration=1.44951922 podStartE2EDuration="1.44951922s" podCreationTimestamp="2025-11-05 14:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:24.449332536 +0000 UTC m=+7.131676427" watchObservedRunningTime="2025-11-05 14:59:24.44951922 +0000 UTC m=+7.131863111" Nov 5 14:59:25.123185 kubelet[2692]: E1105 14:59:25.123075 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:25.355450 kubelet[2692]: E1105 14:59:25.355390 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:25.439080 kubelet[2692]: E1105 14:59:25.438593 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:25.439080 kubelet[2692]: E1105 14:59:25.438762 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:26.440509 kubelet[2692]: E1105 14:59:26.440411 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:27.567437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2571278700.mount: Deactivated successfully. 
Nov 5 14:59:29.000483 containerd[1563]: time="2025-11-05T14:59:29.000433354Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:29.001445 containerd[1563]: time="2025-11-05T14:59:29.000966533Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Nov 5 14:59:29.001867 containerd[1563]: time="2025-11-05T14:59:29.001837290Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:29.003782 containerd[1563]: time="2025-11-05T14:59:29.003373648Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 5.28532443s" Nov 5 14:59:29.003890 containerd[1563]: time="2025-11-05T14:59:29.003873138Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 5 14:59:29.007157 containerd[1563]: time="2025-11-05T14:59:29.007128046Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 5 14:59:29.011146 containerd[1563]: time="2025-11-05T14:59:29.011114006Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 14:59:29.018843 containerd[1563]: time="2025-11-05T14:59:29.018798433Z" level=info msg="Container 14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:29.023752 containerd[1563]: time="2025-11-05T14:59:29.023668473Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\"" Nov 5 14:59:29.024452 containerd[1563]: time="2025-11-05T14:59:29.024415488Z" level=info msg="StartContainer for \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\"" Nov 5 14:59:29.025221 containerd[1563]: time="2025-11-05T14:59:29.025174505Z" level=info msg="connecting to shim 14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4" address="unix:///run/containerd/s/bf6c6fcb7a2852b6bf4a5c0f5a4780a0b5faedfe62612e03d280c4caf1f8537c" protocol=ttrpc version=3 Nov 5 14:59:29.065874 systemd[1]: Started cri-containerd-14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4.scope - libcontainer container 14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4. 
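
The cilium image pull above reports roughly 157 MB read and a duration of 5.28532443s; that duration is essentially the gap between the PullImage line at 14:59:23.718 and the Pulled line at 14:59:29.003. A trivial sketch of checking such gaps from the RFC 3339 timestamps containerd emits, with both timestamps copied from this log.

package main

import (
	"fmt"
	"log"
	"time"
)

func main() {
	// Timestamps taken from the PullImage and Pulled lines above.
	start, err := time.Parse(time.RFC3339Nano, "2025-11-05T14:59:23.718017673Z")
	if err != nil {
		log.Fatal(err)
	}
	done, err := time.Parse(time.RFC3339Nano, "2025-11-05T14:59:29.003373648Z")
	if err != nil {
		log.Fatal(err)
	}
	// Close to the 5.28532443s containerd reports; the pull starts slightly
	// after the PullImage line is emitted, hence the small difference.
	fmt.Println("pull took about", done.Sub(start))
}
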
Nov 5 14:59:29.096045 containerd[1563]: time="2025-11-05T14:59:29.095996176Z" level=info msg="StartContainer for \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" returns successfully" Nov 5 14:59:29.108906 systemd[1]: cri-containerd-14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4.scope: Deactivated successfully. Nov 5 14:59:29.142991 containerd[1563]: time="2025-11-05T14:59:29.142938133Z" level=info msg="received exit event container_id:\"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" id:\"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" pid:3123 exited_at:{seconds:1762354769 nanos:132063049}" Nov 5 14:59:29.143290 containerd[1563]: time="2025-11-05T14:59:29.143256671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" id:\"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" pid:3123 exited_at:{seconds:1762354769 nanos:132063049}" Nov 5 14:59:29.175916 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4-rootfs.mount: Deactivated successfully. Nov 5 14:59:29.449344 kubelet[2692]: E1105 14:59:29.449315 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:29.454726 containerd[1563]: time="2025-11-05T14:59:29.454299686Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 14:59:29.462715 containerd[1563]: time="2025-11-05T14:59:29.462653475Z" level=info msg="Container 47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:29.467292 containerd[1563]: time="2025-11-05T14:59:29.467249985Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\"" Nov 5 14:59:29.468895 containerd[1563]: time="2025-11-05T14:59:29.468868478Z" level=info msg="StartContainer for \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\"" Nov 5 14:59:29.470005 containerd[1563]: time="2025-11-05T14:59:29.469941511Z" level=info msg="connecting to shim 47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05" address="unix:///run/containerd/s/bf6c6fcb7a2852b6bf4a5c0f5a4780a0b5faedfe62612e03d280c4caf1f8537c" protocol=ttrpc version=3 Nov 5 14:59:29.502958 systemd[1]: Started cri-containerd-47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05.scope - libcontainer container 47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05. Nov 5 14:59:29.533012 containerd[1563]: time="2025-11-05T14:59:29.532971855Z" level=info msg="StartContainer for \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" returns successfully" Nov 5 14:59:29.546014 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 5 14:59:29.546242 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 5 14:59:29.546309 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 5 14:59:29.548270 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
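
The apply-sysctl-overwrites init container that runs here writes kernel-parameter overrides onto the host, which is consistent with systemd-sysctl.service being stopped and restarted immediately afterwards. Which keys Cilium actually sets depends on its version and manifest, so the parameters below are assumptions used only to illustrate inspecting such settings; consult the Cilium DaemonSet for the authoritative list.

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	// Assumption: these keys are examples of parameters a CNI sysctl init
	// step may touch, not a statement of what this container changed.
	keys := []string{
		"/proc/sys/net/ipv4/conf/all/rp_filter",
		"/proc/sys/net/core/bpf_jit_enable",
	}
	for _, k := range keys {
		b, err := os.ReadFile(k)
		if err != nil {
			fmt.Printf("%s: %v\n", k, err)
			continue
		}
		fmt.Printf("%s = %s\n", k, strings.TrimSpace(string(b)))
	}
}
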
Nov 5 14:59:29.549356 systemd[1]: cri-containerd-47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05.scope: Deactivated successfully. Nov 5 14:59:29.549847 containerd[1563]: time="2025-11-05T14:59:29.549721960Z" level=info msg="received exit event container_id:\"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" id:\"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" pid:3168 exited_at:{seconds:1762354769 nanos:549257196}" Nov 5 14:59:29.549847 containerd[1563]: time="2025-11-05T14:59:29.549814777Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" id:\"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" pid:3168 exited_at:{seconds:1762354769 nanos:549257196}" Nov 5 14:59:29.587918 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 5 14:59:30.308228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3651387555.mount: Deactivated successfully. Nov 5 14:59:30.455413 kubelet[2692]: E1105 14:59:30.455349 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:30.459585 containerd[1563]: time="2025-11-05T14:59:30.459023372Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 14:59:30.496760 containerd[1563]: time="2025-11-05T14:59:30.496646885Z" level=info msg="Container e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:30.500287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2843190129.mount: Deactivated successfully. Nov 5 14:59:30.514997 containerd[1563]: time="2025-11-05T14:59:30.514838235Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\"" Nov 5 14:59:30.515650 containerd[1563]: time="2025-11-05T14:59:30.515567440Z" level=info msg="StartContainer for \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\"" Nov 5 14:59:30.517365 containerd[1563]: time="2025-11-05T14:59:30.517273171Z" level=info msg="connecting to shim e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1" address="unix:///run/containerd/s/bf6c6fcb7a2852b6bf4a5c0f5a4780a0b5faedfe62612e03d280c4caf1f8537c" protocol=ttrpc version=3 Nov 5 14:59:30.542001 systemd[1]: Started cri-containerd-e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1.scope - libcontainer container e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1. Nov 5 14:59:30.596338 containerd[1563]: time="2025-11-05T14:59:30.596220189Z" level=info msg="StartContainer for \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" returns successfully" Nov 5 14:59:30.598044 systemd[1]: cri-containerd-e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1.scope: Deactivated successfully. Nov 5 14:59:30.598467 systemd[1]: cri-containerd-e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1.scope: Consumed 30ms CPU time, 7.2M memory peak, 6M read from disk. 
Nov 5 14:59:30.622508 containerd[1563]: time="2025-11-05T14:59:30.622349897Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" id:\"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" pid:3221 exited_at:{seconds:1762354770 nanos:607447309}" Nov 5 14:59:30.622635 containerd[1563]: time="2025-11-05T14:59:30.622580297Z" level=info msg="received exit event container_id:\"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" id:\"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" pid:3221 exited_at:{seconds:1762354770 nanos:607447309}" Nov 5 14:59:31.012397 kubelet[2692]: E1105 14:59:31.012364 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:31.460389 kubelet[2692]: E1105 14:59:31.460229 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:31.465267 containerd[1563]: time="2025-11-05T14:59:31.465215460Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 14:59:31.487833 containerd[1563]: time="2025-11-05T14:59:31.487773673Z" level=info msg="Container d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:31.493160 containerd[1563]: time="2025-11-05T14:59:31.493076772Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\"" Nov 5 14:59:31.493711 containerd[1563]: time="2025-11-05T14:59:31.493539567Z" level=info msg="StartContainer for \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\"" Nov 5 14:59:31.496032 containerd[1563]: time="2025-11-05T14:59:31.495999965Z" level=info msg="connecting to shim d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d" address="unix:///run/containerd/s/bf6c6fcb7a2852b6bf4a5c0f5a4780a0b5faedfe62612e03d280c4caf1f8537c" protocol=ttrpc version=3 Nov 5 14:59:31.519869 systemd[1]: Started cri-containerd-d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d.scope - libcontainer container d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d. Nov 5 14:59:31.544129 systemd[1]: cri-containerd-d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d.scope: Deactivated successfully. 
Nov 5 14:59:31.546373 containerd[1563]: time="2025-11-05T14:59:31.546287750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" id:\"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" pid:3260 exited_at:{seconds:1762354771 nanos:545993942}" Nov 5 14:59:31.546559 containerd[1563]: time="2025-11-05T14:59:31.546521427Z" level=info msg="received exit event container_id:\"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" id:\"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" pid:3260 exited_at:{seconds:1762354771 nanos:545993942}" Nov 5 14:59:31.554040 containerd[1563]: time="2025-11-05T14:59:31.553981556Z" level=info msg="StartContainer for \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" returns successfully" Nov 5 14:59:31.565259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d-rootfs.mount: Deactivated successfully. Nov 5 14:59:32.467202 kubelet[2692]: E1105 14:59:32.467019 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:32.471625 containerd[1563]: time="2025-11-05T14:59:32.471585825Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 14:59:32.494947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2681335694.mount: Deactivated successfully. Nov 5 14:59:32.500955 containerd[1563]: time="2025-11-05T14:59:32.500912567Z" level=info msg="Container 1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:32.501848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107513713.mount: Deactivated successfully. Nov 5 14:59:32.509751 containerd[1563]: time="2025-11-05T14:59:32.509669111Z" level=info msg="CreateContainer within sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\"" Nov 5 14:59:32.511429 containerd[1563]: time="2025-11-05T14:59:32.510936505Z" level=info msg="StartContainer for \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\"" Nov 5 14:59:32.512763 containerd[1563]: time="2025-11-05T14:59:32.512723459Z" level=info msg="connecting to shim 1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41" address="unix:///run/containerd/s/bf6c6fcb7a2852b6bf4a5c0f5a4780a0b5faedfe62612e03d280c4caf1f8537c" protocol=ttrpc version=3 Nov 5 14:59:32.542867 systemd[1]: Started cri-containerd-1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41.scope - libcontainer container 1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41. 
Nov 5 14:59:32.579129 containerd[1563]: time="2025-11-05T14:59:32.579036518Z" level=info msg="StartContainer for \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" returns successfully" Nov 5 14:59:32.692047 containerd[1563]: time="2025-11-05T14:59:32.691994017Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" id:\"8bda01e33ef795ccbc03a5a7ff47e477b7db2c231046286931b15a0bbbacfdbc\" pid:3339 exited_at:{seconds:1762354772 nanos:691649804}" Nov 5 14:59:32.753922 containerd[1563]: time="2025-11-05T14:59:32.753800384Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:32.754635 containerd[1563]: time="2025-11-05T14:59:32.754403196Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Nov 5 14:59:32.755864 containerd[1563]: time="2025-11-05T14:59:32.755831616Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 5 14:59:32.756213 kubelet[2692]: I1105 14:59:32.756103 2692 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 5 14:59:32.761704 containerd[1563]: time="2025-11-05T14:59:32.760975285Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.753715736s" Nov 5 14:59:32.761704 containerd[1563]: time="2025-11-05T14:59:32.761036255Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 5 14:59:32.765749 containerd[1563]: time="2025-11-05T14:59:32.765666445Z" level=info msg="CreateContainer within sandbox \"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 5 14:59:32.774549 containerd[1563]: time="2025-11-05T14:59:32.774487919Z" level=info msg="Container c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:32.782532 containerd[1563]: time="2025-11-05T14:59:32.782475625Z" level=info msg="CreateContainer within sandbox \"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\"" Nov 5 14:59:32.784722 containerd[1563]: time="2025-11-05T14:59:32.783385405Z" level=info msg="StartContainer for \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\"" Nov 5 14:59:32.784722 containerd[1563]: time="2025-11-05T14:59:32.784381798Z" level=info msg="connecting to shim c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917" 
address="unix:///run/containerd/s/0deb092baa25dfc49203bdcb3d2a99491e1e96d96c6e8063c53ca323782ee7a4" protocol=ttrpc version=3 Nov 5 14:59:32.800981 systemd[1]: Created slice kubepods-burstable-pod8671f4e0_ea5b_4949_baf8_564c6dfabbea.slice - libcontainer container kubepods-burstable-pod8671f4e0_ea5b_4949_baf8_564c6dfabbea.slice. Nov 5 14:59:32.810369 systemd[1]: Created slice kubepods-burstable-podbd339b25_405c_4df8_a85c_5bf9b78d25ac.slice - libcontainer container kubepods-burstable-podbd339b25_405c_4df8_a85c_5bf9b78d25ac.slice. Nov 5 14:59:32.824892 systemd[1]: Started cri-containerd-c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917.scope - libcontainer container c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917. Nov 5 14:59:32.856599 containerd[1563]: time="2025-11-05T14:59:32.856409174Z" level=info msg="StartContainer for \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" returns successfully" Nov 5 14:59:32.900195 kubelet[2692]: I1105 14:59:32.900038 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd339b25-405c-4df8-a85c-5bf9b78d25ac-config-volume\") pod \"coredns-668d6bf9bc-lw48k\" (UID: \"bd339b25-405c-4df8-a85c-5bf9b78d25ac\") " pod="kube-system/coredns-668d6bf9bc-lw48k" Nov 5 14:59:32.900195 kubelet[2692]: I1105 14:59:32.900083 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpg6m\" (UniqueName: \"kubernetes.io/projected/bd339b25-405c-4df8-a85c-5bf9b78d25ac-kube-api-access-bpg6m\") pod \"coredns-668d6bf9bc-lw48k\" (UID: \"bd339b25-405c-4df8-a85c-5bf9b78d25ac\") " pod="kube-system/coredns-668d6bf9bc-lw48k" Nov 5 14:59:32.900195 kubelet[2692]: I1105 14:59:32.900101 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8671f4e0-ea5b-4949-baf8-564c6dfabbea-config-volume\") pod \"coredns-668d6bf9bc-ctbwd\" (UID: \"8671f4e0-ea5b-4949-baf8-564c6dfabbea\") " pod="kube-system/coredns-668d6bf9bc-ctbwd" Nov 5 14:59:32.900195 kubelet[2692]: I1105 14:59:32.900120 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7snd\" (UniqueName: \"kubernetes.io/projected/8671f4e0-ea5b-4949-baf8-564c6dfabbea-kube-api-access-l7snd\") pod \"coredns-668d6bf9bc-ctbwd\" (UID: \"8671f4e0-ea5b-4949-baf8-564c6dfabbea\") " pod="kube-system/coredns-668d6bf9bc-ctbwd" Nov 5 14:59:33.108174 kubelet[2692]: E1105 14:59:33.108003 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:33.109035 containerd[1563]: time="2025-11-05T14:59:33.108999575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ctbwd,Uid:8671f4e0-ea5b-4949-baf8-564c6dfabbea,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:33.113805 kubelet[2692]: E1105 14:59:33.113657 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:33.114573 containerd[1563]: time="2025-11-05T14:59:33.114537461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lw48k,Uid:bd339b25-405c-4df8-a85c-5bf9b78d25ac,Namespace:kube-system,Attempt:0,}" Nov 5 14:59:33.475020 
kubelet[2692]: E1105 14:59:33.474898 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:33.490790 kubelet[2692]: E1105 14:59:33.490743 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:33.506704 kubelet[2692]: I1105 14:59:33.505986 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-cb9tm" podStartSLOduration=1.4698526840000001 podStartE2EDuration="10.505967959s" podCreationTimestamp="2025-11-05 14:59:23 +0000 UTC" firstStartedPulling="2025-11-05 14:59:23.72740224 +0000 UTC m=+6.409746131" lastFinishedPulling="2025-11-05 14:59:32.763517515 +0000 UTC m=+15.445861406" observedRunningTime="2025-11-05 14:59:33.504210824 +0000 UTC m=+16.186554715" watchObservedRunningTime="2025-11-05 14:59:33.505967959 +0000 UTC m=+16.188311850" Nov 5 14:59:33.533244 kubelet[2692]: I1105 14:59:33.533178 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qvpr4" podStartSLOduration=5.243691391 podStartE2EDuration="10.533158517s" podCreationTimestamp="2025-11-05 14:59:23 +0000 UTC" firstStartedPulling="2025-11-05 14:59:23.717534977 +0000 UTC m=+6.399878868" lastFinishedPulling="2025-11-05 14:59:29.007002103 +0000 UTC m=+11.689345994" observedRunningTime="2025-11-05 14:59:33.533010056 +0000 UTC m=+16.215353947" watchObservedRunningTime="2025-11-05 14:59:33.533158517 +0000 UTC m=+16.215502409" Nov 5 14:59:34.493861 kubelet[2692]: E1105 14:59:34.493810 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:34.494245 kubelet[2692]: E1105 14:59:34.493929 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:35.495443 kubelet[2692]: E1105 14:59:35.495403 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:36.496939 kubelet[2692]: E1105 14:59:36.496893 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:36.744753 systemd-networkd[1476]: cilium_host: Link UP Nov 5 14:59:36.744881 systemd-networkd[1476]: cilium_net: Link UP Nov 5 14:59:36.745017 systemd-networkd[1476]: cilium_net: Gained carrier Nov 5 14:59:36.745140 systemd-networkd[1476]: cilium_host: Gained carrier Nov 5 14:59:36.844390 systemd-networkd[1476]: cilium_vxlan: Link UP Nov 5 14:59:36.844398 systemd-networkd[1476]: cilium_vxlan: Gained carrier Nov 5 14:59:36.865803 update_engine[1546]: I20251105 14:59:36.865728 1546 update_attempter.cc:509] Updating boot flags... 
Nov 5 14:59:37.110723 kernel: NET: Registered PF_ALG protocol family Nov 5 14:59:37.486467 systemd-networkd[1476]: cilium_net: Gained IPv6LL Nov 5 14:59:37.486768 systemd-networkd[1476]: cilium_host: Gained IPv6LL Nov 5 14:59:37.728050 systemd-networkd[1476]: lxc_health: Link UP Nov 5 14:59:37.728938 systemd-networkd[1476]: lxc_health: Gained carrier Nov 5 14:59:38.179727 kernel: eth0: renamed from tmp867bf Nov 5 14:59:38.179758 systemd-networkd[1476]: lxc14d6ae89ae82: Link UP Nov 5 14:59:38.196626 kernel: eth0: renamed from tmp334d0 Nov 5 14:59:38.194208 systemd-networkd[1476]: lxc14d6ae89ae82: Gained carrier Nov 5 14:59:38.194484 systemd-networkd[1476]: lxcfce1da82dca7: Link UP Nov 5 14:59:38.194715 systemd-networkd[1476]: cilium_vxlan: Gained IPv6LL Nov 5 14:59:38.200196 systemd-networkd[1476]: lxcfce1da82dca7: Gained carrier Nov 5 14:59:39.469270 systemd-networkd[1476]: lxc_health: Gained IPv6LL Nov 5 14:59:39.531831 systemd-networkd[1476]: lxcfce1da82dca7: Gained IPv6LL Nov 5 14:59:39.613340 kubelet[2692]: E1105 14:59:39.613232 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:40.043875 systemd-networkd[1476]: lxc14d6ae89ae82: Gained IPv6LL Nov 5 14:59:40.506662 kubelet[2692]: E1105 14:59:40.506616 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:41.783712 containerd[1563]: time="2025-11-05T14:59:41.783531920Z" level=info msg="connecting to shim 867bf1d94e485c92fdc1f60883b21d22f1461aeb58147b5dac6180754a713489" address="unix:///run/containerd/s/91317016e840c5a1e7cfffecdec687c54167c35ca6cfe194abfaa856193706de" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:41.785103 containerd[1563]: time="2025-11-05T14:59:41.784956619Z" level=info msg="connecting to shim 334d0c1d06bccc5e62d1eb9c8ae14549cc2e33a64311d2ae0b90faf4fb61589d" address="unix:///run/containerd/s/e47f01c1ed86d0126d907f3e405cc8ad4185dcf8a5c1584005add1f3cdd79212" namespace=k8s.io protocol=ttrpc version=3 Nov 5 14:59:41.815904 systemd[1]: Started cri-containerd-867bf1d94e485c92fdc1f60883b21d22f1461aeb58147b5dac6180754a713489.scope - libcontainer container 867bf1d94e485c92fdc1f60883b21d22f1461aeb58147b5dac6180754a713489. Nov 5 14:59:41.819127 systemd[1]: Started cri-containerd-334d0c1d06bccc5e62d1eb9c8ae14549cc2e33a64311d2ae0b90faf4fb61589d.scope - libcontainer container 334d0c1d06bccc5e62d1eb9c8ae14549cc2e33a64311d2ae0b90faf4fb61589d. 
Nov 5 14:59:41.834856 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:41.835533 systemd-resolved[1272]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 5 14:59:41.859718 containerd[1563]: time="2025-11-05T14:59:41.859657429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-lw48k,Uid:bd339b25-405c-4df8-a85c-5bf9b78d25ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"867bf1d94e485c92fdc1f60883b21d22f1461aeb58147b5dac6180754a713489\"" Nov 5 14:59:41.862982 kubelet[2692]: E1105 14:59:41.862935 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:41.871408 containerd[1563]: time="2025-11-05T14:59:41.871366851Z" level=info msg="CreateContainer within sandbox \"867bf1d94e485c92fdc1f60883b21d22f1461aeb58147b5dac6180754a713489\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 14:59:41.906021 containerd[1563]: time="2025-11-05T14:59:41.905975669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-ctbwd,Uid:8671f4e0-ea5b-4949-baf8-564c6dfabbea,Namespace:kube-system,Attempt:0,} returns sandbox id \"334d0c1d06bccc5e62d1eb9c8ae14549cc2e33a64311d2ae0b90faf4fb61589d\"" Nov 5 14:59:41.908456 kubelet[2692]: E1105 14:59:41.908408 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:41.910354 containerd[1563]: time="2025-11-05T14:59:41.910318613Z" level=info msg="CreateContainer within sandbox \"334d0c1d06bccc5e62d1eb9c8ae14549cc2e33a64311d2ae0b90faf4fb61589d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 5 14:59:41.932634 containerd[1563]: time="2025-11-05T14:59:41.932515179Z" level=info msg="Container 396e5025bc8600257af76c390350b4182f0074fca5be7c50ae8e8bb62e5a4969: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:41.938447 containerd[1563]: time="2025-11-05T14:59:41.938397153Z" level=info msg="Container c6b794a3cf12f89aac80b5513b67396b1d6375af1a993f67e0eb0845a4437f62: CDI devices from CRI Config.CDIDevices: []" Nov 5 14:59:41.944579 containerd[1563]: time="2025-11-05T14:59:41.944529831Z" level=info msg="CreateContainer within sandbox \"867bf1d94e485c92fdc1f60883b21d22f1461aeb58147b5dac6180754a713489\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"396e5025bc8600257af76c390350b4182f0074fca5be7c50ae8e8bb62e5a4969\"" Nov 5 14:59:41.945410 containerd[1563]: time="2025-11-05T14:59:41.945373714Z" level=info msg="StartContainer for \"396e5025bc8600257af76c390350b4182f0074fca5be7c50ae8e8bb62e5a4969\"" Nov 5 14:59:41.946611 containerd[1563]: time="2025-11-05T14:59:41.946464420Z" level=info msg="connecting to shim 396e5025bc8600257af76c390350b4182f0074fca5be7c50ae8e8bb62e5a4969" address="unix:///run/containerd/s/91317016e840c5a1e7cfffecdec687c54167c35ca6cfe194abfaa856193706de" protocol=ttrpc version=3 Nov 5 14:59:41.949125 containerd[1563]: time="2025-11-05T14:59:41.949085836Z" level=info msg="CreateContainer within sandbox \"334d0c1d06bccc5e62d1eb9c8ae14549cc2e33a64311d2ae0b90faf4fb61589d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c6b794a3cf12f89aac80b5513b67396b1d6375af1a993f67e0eb0845a4437f62\"" Nov 5 14:59:41.950843 containerd[1563]: 
time="2025-11-05T14:59:41.950798003Z" level=info msg="StartContainer for \"c6b794a3cf12f89aac80b5513b67396b1d6375af1a993f67e0eb0845a4437f62\"" Nov 5 14:59:41.953548 containerd[1563]: time="2025-11-05T14:59:41.953484705Z" level=info msg="connecting to shim c6b794a3cf12f89aac80b5513b67396b1d6375af1a993f67e0eb0845a4437f62" address="unix:///run/containerd/s/e47f01c1ed86d0126d907f3e405cc8ad4185dcf8a5c1584005add1f3cdd79212" protocol=ttrpc version=3 Nov 5 14:59:41.977902 systemd[1]: Started cri-containerd-c6b794a3cf12f89aac80b5513b67396b1d6375af1a993f67e0eb0845a4437f62.scope - libcontainer container c6b794a3cf12f89aac80b5513b67396b1d6375af1a993f67e0eb0845a4437f62. Nov 5 14:59:41.981030 systemd[1]: Started cri-containerd-396e5025bc8600257af76c390350b4182f0074fca5be7c50ae8e8bb62e5a4969.scope - libcontainer container 396e5025bc8600257af76c390350b4182f0074fca5be7c50ae8e8bb62e5a4969. Nov 5 14:59:42.011885 containerd[1563]: time="2025-11-05T14:59:42.011709303Z" level=info msg="StartContainer for \"396e5025bc8600257af76c390350b4182f0074fca5be7c50ae8e8bb62e5a4969\" returns successfully" Nov 5 14:59:42.040468 containerd[1563]: time="2025-11-05T14:59:42.040361172Z" level=info msg="StartContainer for \"c6b794a3cf12f89aac80b5513b67396b1d6375af1a993f67e0eb0845a4437f62\" returns successfully" Nov 5 14:59:42.182642 systemd[1]: Started sshd@7-10.0.0.22:22-10.0.0.1:39514.service - OpenSSH per-connection server daemon (10.0.0.1:39514). Nov 5 14:59:42.247914 sshd[4043]: Accepted publickey for core from 10.0.0.1 port 39514 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:42.249448 sshd-session[4043]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:42.254930 systemd-logind[1544]: New session 8 of user core. Nov 5 14:59:42.262898 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 5 14:59:42.393969 sshd[4046]: Connection closed by 10.0.0.1 port 39514 Nov 5 14:59:42.394553 sshd-session[4043]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:42.398278 systemd[1]: sshd@7-10.0.0.22:22-10.0.0.1:39514.service: Deactivated successfully. Nov 5 14:59:42.400059 systemd[1]: session-8.scope: Deactivated successfully. Nov 5 14:59:42.400735 systemd-logind[1544]: Session 8 logged out. Waiting for processes to exit. Nov 5 14:59:42.401656 systemd-logind[1544]: Removed session 8. 
Nov 5 14:59:42.511987 kubelet[2692]: E1105 14:59:42.511948 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:42.522910 kubelet[2692]: E1105 14:59:42.522835 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:42.527315 kubelet[2692]: I1105 14:59:42.527263 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-lw48k" podStartSLOduration=19.527243005 podStartE2EDuration="19.527243005s" podCreationTimestamp="2025-11-05 14:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:42.526208829 +0000 UTC m=+25.208552760" watchObservedRunningTime="2025-11-05 14:59:42.527243005 +0000 UTC m=+25.209586896" Nov 5 14:59:42.556916 kubelet[2692]: I1105 14:59:42.556837 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-ctbwd" podStartSLOduration=19.55681636 podStartE2EDuration="19.55681636s" podCreationTimestamp="2025-11-05 14:59:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 14:59:42.556117295 +0000 UTC m=+25.238461186" watchObservedRunningTime="2025-11-05 14:59:42.55681636 +0000 UTC m=+25.239160251" Nov 5 14:59:43.523836 kubelet[2692]: E1105 14:59:43.523809 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:43.525511 kubelet[2692]: E1105 14:59:43.525487 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:44.525599 kubelet[2692]: E1105 14:59:44.525453 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:44.525599 kubelet[2692]: E1105 14:59:44.525535 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 14:59:47.404551 systemd[1]: Started sshd@8-10.0.0.22:22-10.0.0.1:39564.service - OpenSSH per-connection server daemon (10.0.0.1:39564). Nov 5 14:59:47.473553 sshd[4069]: Accepted publickey for core from 10.0.0.1 port 39564 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:47.477601 sshd-session[4069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:47.487340 systemd-logind[1544]: New session 9 of user core. Nov 5 14:59:47.496888 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 5 14:59:47.631126 sshd[4072]: Connection closed by 10.0.0.1 port 39564 Nov 5 14:59:47.631770 sshd-session[4069]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:47.635738 systemd[1]: sshd@8-10.0.0.22:22-10.0.0.1:39564.service: Deactivated successfully. Nov 5 14:59:47.638453 systemd[1]: session-9.scope: Deactivated successfully. Nov 5 14:59:47.639535 systemd-logind[1544]: Session 9 logged out. Waiting for processes to exit. 
Nov 5 14:59:47.640828 systemd-logind[1544]: Removed session 9. Nov 5 14:59:52.648026 systemd[1]: Started sshd@9-10.0.0.22:22-10.0.0.1:41570.service - OpenSSH per-connection server daemon (10.0.0.1:41570). Nov 5 14:59:52.717866 sshd[4086]: Accepted publickey for core from 10.0.0.1 port 41570 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:52.719153 sshd-session[4086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:52.723483 systemd-logind[1544]: New session 10 of user core. Nov 5 14:59:52.733852 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 5 14:59:52.870013 sshd[4089]: Connection closed by 10.0.0.1 port 41570 Nov 5 14:59:52.870495 sshd-session[4086]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:52.874359 systemd[1]: sshd@9-10.0.0.22:22-10.0.0.1:41570.service: Deactivated successfully. Nov 5 14:59:52.876352 systemd[1]: session-10.scope: Deactivated successfully. Nov 5 14:59:52.877320 systemd-logind[1544]: Session 10 logged out. Waiting for processes to exit. Nov 5 14:59:52.878498 systemd-logind[1544]: Removed session 10. Nov 5 14:59:57.891372 systemd[1]: Started sshd@10-10.0.0.22:22-10.0.0.1:41598.service - OpenSSH per-connection server daemon (10.0.0.1:41598). Nov 5 14:59:57.951681 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 41598 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:57.953471 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:57.958976 systemd-logind[1544]: New session 11 of user core. Nov 5 14:59:57.972881 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 5 14:59:58.083078 sshd[4109]: Connection closed by 10.0.0.1 port 41598 Nov 5 14:59:58.083415 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:58.097412 systemd[1]: sshd@10-10.0.0.22:22-10.0.0.1:41598.service: Deactivated successfully. Nov 5 14:59:58.099093 systemd[1]: session-11.scope: Deactivated successfully. Nov 5 14:59:58.099735 systemd-logind[1544]: Session 11 logged out. Waiting for processes to exit. Nov 5 14:59:58.102585 systemd[1]: Started sshd@11-10.0.0.22:22-10.0.0.1:41628.service - OpenSSH per-connection server daemon (10.0.0.1:41628). Nov 5 14:59:58.103092 systemd-logind[1544]: Removed session 11. Nov 5 14:59:58.164163 sshd[4124]: Accepted publickey for core from 10.0.0.1 port 41628 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:58.165514 sshd-session[4124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:58.170520 systemd-logind[1544]: New session 12 of user core. Nov 5 14:59:58.184937 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 5 14:59:58.354524 sshd[4127]: Connection closed by 10.0.0.1 port 41628 Nov 5 14:59:58.354885 sshd-session[4124]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:58.374376 systemd[1]: sshd@11-10.0.0.22:22-10.0.0.1:41628.service: Deactivated successfully. Nov 5 14:59:58.377663 systemd[1]: session-12.scope: Deactivated successfully. Nov 5 14:59:58.378875 systemd-logind[1544]: Session 12 logged out. Waiting for processes to exit. Nov 5 14:59:58.382737 systemd[1]: Started sshd@12-10.0.0.22:22-10.0.0.1:41636.service - OpenSSH per-connection server daemon (10.0.0.1:41636). Nov 5 14:59:58.385013 systemd-logind[1544]: Removed session 12. 
Nov 5 14:59:58.453125 sshd[4140]: Accepted publickey for core from 10.0.0.1 port 41636 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 14:59:58.454380 sshd-session[4140]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 14:59:58.458474 systemd-logind[1544]: New session 13 of user core. Nov 5 14:59:58.473849 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 5 14:59:58.594204 sshd[4143]: Connection closed by 10.0.0.1 port 41636 Nov 5 14:59:58.594559 sshd-session[4140]: pam_unix(sshd:session): session closed for user core Nov 5 14:59:58.598387 systemd[1]: sshd@12-10.0.0.22:22-10.0.0.1:41636.service: Deactivated successfully. Nov 5 14:59:58.600085 systemd[1]: session-13.scope: Deactivated successfully. Nov 5 14:59:58.600817 systemd-logind[1544]: Session 13 logged out. Waiting for processes to exit. Nov 5 14:59:58.601852 systemd-logind[1544]: Removed session 13. Nov 5 15:00:03.609923 systemd[1]: Started sshd@13-10.0.0.22:22-10.0.0.1:58864.service - OpenSSH per-connection server daemon (10.0.0.1:58864). Nov 5 15:00:03.672673 sshd[4156]: Accepted publickey for core from 10.0.0.1 port 58864 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:03.675058 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:03.683378 systemd-logind[1544]: New session 14 of user core. Nov 5 15:00:03.695914 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 5 15:00:03.813645 sshd[4159]: Connection closed by 10.0.0.1 port 58864 Nov 5 15:00:03.814179 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:03.817980 systemd[1]: sshd@13-10.0.0.22:22-10.0.0.1:58864.service: Deactivated successfully. Nov 5 15:00:03.819681 systemd[1]: session-14.scope: Deactivated successfully. Nov 5 15:00:03.821333 systemd-logind[1544]: Session 14 logged out. Waiting for processes to exit. Nov 5 15:00:03.822444 systemd-logind[1544]: Removed session 14. Nov 5 15:00:08.837168 systemd[1]: Started sshd@14-10.0.0.22:22-10.0.0.1:58870.service - OpenSSH per-connection server daemon (10.0.0.1:58870). Nov 5 15:00:08.906580 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 58870 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:08.907957 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:08.912972 systemd-logind[1544]: New session 15 of user core. Nov 5 15:00:08.927936 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 5 15:00:09.052058 sshd[4175]: Connection closed by 10.0.0.1 port 58870 Nov 5 15:00:09.051532 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:09.062729 systemd[1]: sshd@14-10.0.0.22:22-10.0.0.1:58870.service: Deactivated successfully. Nov 5 15:00:09.064510 systemd[1]: session-15.scope: Deactivated successfully. Nov 5 15:00:09.065389 systemd-logind[1544]: Session 15 logged out. Waiting for processes to exit. Nov 5 15:00:09.068061 systemd[1]: Started sshd@15-10.0.0.22:22-10.0.0.1:58886.service - OpenSSH per-connection server daemon (10.0.0.1:58886). Nov 5 15:00:09.069285 systemd-logind[1544]: Removed session 15. 
Nov 5 15:00:09.135323 sshd[4188]: Accepted publickey for core from 10.0.0.1 port 58886 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:09.137554 sshd-session[4188]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:09.143059 systemd-logind[1544]: New session 16 of user core. Nov 5 15:00:09.150942 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 5 15:00:09.364050 sshd[4191]: Connection closed by 10.0.0.1 port 58886 Nov 5 15:00:09.364530 sshd-session[4188]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:09.373009 systemd[1]: sshd@15-10.0.0.22:22-10.0.0.1:58886.service: Deactivated successfully. Nov 5 15:00:09.375712 systemd[1]: session-16.scope: Deactivated successfully. Nov 5 15:00:09.377004 systemd-logind[1544]: Session 16 logged out. Waiting for processes to exit. Nov 5 15:00:09.380585 systemd[1]: Started sshd@16-10.0.0.22:22-10.0.0.1:35604.service - OpenSSH per-connection server daemon (10.0.0.1:35604). Nov 5 15:00:09.381431 systemd-logind[1544]: Removed session 16. Nov 5 15:00:09.451807 sshd[4202]: Accepted publickey for core from 10.0.0.1 port 35604 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:09.453648 sshd-session[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:09.458294 systemd-logind[1544]: New session 17 of user core. Nov 5 15:00:09.472947 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 5 15:00:10.157068 sshd[4205]: Connection closed by 10.0.0.1 port 35604 Nov 5 15:00:10.157441 sshd-session[4202]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:10.171476 systemd[1]: sshd@16-10.0.0.22:22-10.0.0.1:35604.service: Deactivated successfully. Nov 5 15:00:10.176276 systemd[1]: session-17.scope: Deactivated successfully. Nov 5 15:00:10.177986 systemd-logind[1544]: Session 17 logged out. Waiting for processes to exit. Nov 5 15:00:10.186274 systemd[1]: Started sshd@17-10.0.0.22:22-10.0.0.1:35620.service - OpenSSH per-connection server daemon (10.0.0.1:35620). Nov 5 15:00:10.187906 systemd-logind[1544]: Removed session 17. Nov 5 15:00:10.247026 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 35620 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:10.248569 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:10.255700 systemd-logind[1544]: New session 18 of user core. Nov 5 15:00:10.264920 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 5 15:00:10.507073 sshd[4227]: Connection closed by 10.0.0.1 port 35620 Nov 5 15:00:10.507624 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:10.515466 systemd[1]: sshd@17-10.0.0.22:22-10.0.0.1:35620.service: Deactivated successfully. Nov 5 15:00:10.518630 systemd[1]: session-18.scope: Deactivated successfully. Nov 5 15:00:10.520668 systemd-logind[1544]: Session 18 logged out. Waiting for processes to exit. Nov 5 15:00:10.523413 systemd[1]: Started sshd@18-10.0.0.22:22-10.0.0.1:35636.service - OpenSSH per-connection server daemon (10.0.0.1:35636). Nov 5 15:00:10.527469 systemd-logind[1544]: Removed session 18. 
Nov 5 15:00:10.593011 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 35636 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:10.595774 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:10.601592 systemd-logind[1544]: New session 19 of user core. Nov 5 15:00:10.614877 systemd[1]: Started session-19.scope - Session 19 of User core. Nov 5 15:00:10.731618 sshd[4242]: Connection closed by 10.0.0.1 port 35636 Nov 5 15:00:10.731984 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:10.735665 systemd[1]: sshd@18-10.0.0.22:22-10.0.0.1:35636.service: Deactivated successfully. Nov 5 15:00:10.738285 systemd[1]: session-19.scope: Deactivated successfully. Nov 5 15:00:10.738983 systemd-logind[1544]: Session 19 logged out. Waiting for processes to exit. Nov 5 15:00:10.741144 systemd-logind[1544]: Removed session 19. Nov 5 15:00:15.748200 systemd[1]: Started sshd@19-10.0.0.22:22-10.0.0.1:35646.service - OpenSSH per-connection server daemon (10.0.0.1:35646). Nov 5 15:00:15.810796 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 35646 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:15.812105 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:15.816693 systemd-logind[1544]: New session 20 of user core. Nov 5 15:00:15.829933 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 5 15:00:15.945233 sshd[4261]: Connection closed by 10.0.0.1 port 35646 Nov 5 15:00:15.945540 sshd-session[4258]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:15.949445 systemd[1]: sshd@19-10.0.0.22:22-10.0.0.1:35646.service: Deactivated successfully. Nov 5 15:00:15.951206 systemd[1]: session-20.scope: Deactivated successfully. Nov 5 15:00:15.952236 systemd-logind[1544]: Session 20 logged out. Waiting for processes to exit. Nov 5 15:00:15.953539 systemd-logind[1544]: Removed session 20. Nov 5 15:00:20.961261 systemd[1]: Started sshd@20-10.0.0.22:22-10.0.0.1:58874.service - OpenSSH per-connection server daemon (10.0.0.1:58874). Nov 5 15:00:21.007014 sshd[4278]: Accepted publickey for core from 10.0.0.1 port 58874 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:21.008300 sshd-session[4278]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:21.012657 systemd-logind[1544]: New session 21 of user core. Nov 5 15:00:21.019858 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 5 15:00:21.127055 sshd[4281]: Connection closed by 10.0.0.1 port 58874 Nov 5 15:00:21.127445 sshd-session[4278]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:21.130955 systemd[1]: sshd@20-10.0.0.22:22-10.0.0.1:58874.service: Deactivated successfully. Nov 5 15:00:21.133133 systemd[1]: session-21.scope: Deactivated successfully. Nov 5 15:00:21.133853 systemd-logind[1544]: Session 21 logged out. Waiting for processes to exit. Nov 5 15:00:21.134956 systemd-logind[1544]: Removed session 21. Nov 5 15:00:26.139485 systemd[1]: Started sshd@21-10.0.0.22:22-10.0.0.1:58878.service - OpenSSH per-connection server daemon (10.0.0.1:58878). 
Nov 5 15:00:26.197501 sshd[4296]: Accepted publickey for core from 10.0.0.1 port 58878 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:26.199560 sshd-session[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:26.206584 systemd-logind[1544]: New session 22 of user core. Nov 5 15:00:26.218564 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 5 15:00:26.332077 sshd[4299]: Connection closed by 10.0.0.1 port 58878 Nov 5 15:00:26.332409 sshd-session[4296]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:26.337066 systemd[1]: sshd@21-10.0.0.22:22-10.0.0.1:58878.service: Deactivated successfully. Nov 5 15:00:26.338512 systemd[1]: session-22.scope: Deactivated successfully. Nov 5 15:00:26.340678 systemd-logind[1544]: Session 22 logged out. Waiting for processes to exit. Nov 5 15:00:26.342461 systemd-logind[1544]: Removed session 22. Nov 5 15:00:29.421897 kubelet[2692]: E1105 15:00:29.421835 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:31.351143 systemd[1]: Started sshd@22-10.0.0.22:22-10.0.0.1:35786.service - OpenSSH per-connection server daemon (10.0.0.1:35786). Nov 5 15:00:31.421608 sshd[4313]: Accepted publickey for core from 10.0.0.1 port 35786 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:31.422967 sshd-session[4313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:31.428097 systemd-logind[1544]: New session 23 of user core. Nov 5 15:00:31.441844 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 5 15:00:31.569976 sshd[4316]: Connection closed by 10.0.0.1 port 35786 Nov 5 15:00:31.570706 sshd-session[4313]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:31.574629 systemd-logind[1544]: Session 23 logged out. Waiting for processes to exit. Nov 5 15:00:31.575348 systemd[1]: sshd@22-10.0.0.22:22-10.0.0.1:35786.service: Deactivated successfully. Nov 5 15:00:31.577585 systemd[1]: session-23.scope: Deactivated successfully. Nov 5 15:00:31.579260 systemd-logind[1544]: Removed session 23. Nov 5 15:00:34.414011 kubelet[2692]: E1105 15:00:34.413926 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:35.414273 kubelet[2692]: E1105 15:00:35.414180 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:36.582186 systemd[1]: Started sshd@23-10.0.0.22:22-10.0.0.1:35790.service - OpenSSH per-connection server daemon (10.0.0.1:35790). Nov 5 15:00:36.648651 sshd[4329]: Accepted publickey for core from 10.0.0.1 port 35790 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:36.650397 sshd-session[4329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:36.654288 systemd-logind[1544]: New session 24 of user core. Nov 5 15:00:36.673899 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 5 15:00:36.785236 sshd[4332]: Connection closed by 10.0.0.1 port 35790 Nov 5 15:00:36.785620 sshd-session[4329]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:36.789350 systemd[1]: sshd@23-10.0.0.22:22-10.0.0.1:35790.service: Deactivated successfully. Nov 5 15:00:36.790945 systemd[1]: session-24.scope: Deactivated successfully. Nov 5 15:00:36.791662 systemd-logind[1544]: Session 24 logged out. Waiting for processes to exit. Nov 5 15:00:36.792709 systemd-logind[1544]: Removed session 24. Nov 5 15:00:41.799373 systemd[1]: Started sshd@24-10.0.0.22:22-10.0.0.1:45296.service - OpenSSH per-connection server daemon (10.0.0.1:45296). Nov 5 15:00:41.873317 sshd[4346]: Accepted publickey for core from 10.0.0.1 port 45296 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:41.874930 sshd-session[4346]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:41.882483 systemd-logind[1544]: New session 25 of user core. Nov 5 15:00:41.898967 systemd[1]: Started session-25.scope - Session 25 of User core. Nov 5 15:00:42.025725 sshd[4349]: Connection closed by 10.0.0.1 port 45296 Nov 5 15:00:42.026014 sshd-session[4346]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:42.036236 systemd[1]: sshd@24-10.0.0.22:22-10.0.0.1:45296.service: Deactivated successfully. Nov 5 15:00:42.039908 systemd[1]: session-25.scope: Deactivated successfully. Nov 5 15:00:42.043088 systemd-logind[1544]: Session 25 logged out. Waiting for processes to exit. Nov 5 15:00:42.045074 systemd[1]: Started sshd@25-10.0.0.22:22-10.0.0.1:45312.service - OpenSSH per-connection server daemon (10.0.0.1:45312). Nov 5 15:00:42.046256 systemd-logind[1544]: Removed session 25. Nov 5 15:00:42.106183 sshd[4362]: Accepted publickey for core from 10.0.0.1 port 45312 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:42.107917 sshd-session[4362]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:42.113545 systemd-logind[1544]: New session 26 of user core. Nov 5 15:00:42.119942 systemd[1]: Started session-26.scope - Session 26 of User core. Nov 5 15:00:43.885012 containerd[1563]: time="2025-11-05T15:00:43.884872954Z" level=info msg="StopContainer for \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" with timeout 30 (s)" Nov 5 15:00:43.890196 containerd[1563]: time="2025-11-05T15:00:43.890144334Z" level=info msg="Stop container \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" with signal terminated" Nov 5 15:00:43.920241 systemd[1]: cri-containerd-c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917.scope: Deactivated successfully. 
Nov 5 15:00:43.922220 containerd[1563]: time="2025-11-05T15:00:43.922160605Z" level=info msg="received exit event container_id:\"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" id:\"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" pid:3389 exited_at:{seconds:1762354843 nanos:921646178}" Nov 5 15:00:43.922657 containerd[1563]: time="2025-11-05T15:00:43.922518715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" id:\"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" pid:3389 exited_at:{seconds:1762354843 nanos:921646178}" Nov 5 15:00:43.937809 containerd[1563]: time="2025-11-05T15:00:43.937771390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" id:\"0be5d8591d87cf2c3534b3024ec9c149d3bed88571b02427e55bb2fa30886b0e\" pid:4393 exited_at:{seconds:1762354843 nanos:937495078}" Nov 5 15:00:43.941154 containerd[1563]: time="2025-11-05T15:00:43.941115022Z" level=info msg="StopContainer for \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" with timeout 2 (s)" Nov 5 15:00:43.941839 containerd[1563]: time="2025-11-05T15:00:43.941792844Z" level=info msg="Stop container \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" with signal terminated" Nov 5 15:00:43.947230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917-rootfs.mount: Deactivated successfully. Nov 5 15:00:43.952945 systemd-networkd[1476]: lxc_health: Link DOWN Nov 5 15:00:43.953645 containerd[1563]: time="2025-11-05T15:00:43.953161862Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 5 15:00:43.952954 systemd-networkd[1476]: lxc_health: Lost carrier Nov 5 15:00:43.970115 systemd[1]: cri-containerd-1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41.scope: Deactivated successfully. Nov 5 15:00:43.970467 systemd[1]: cri-containerd-1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41.scope: Consumed 6.378s CPU time, 124.9M memory peak, 140K read from disk, 12.9M written to disk. Nov 5 15:00:43.971100 containerd[1563]: time="2025-11-05T15:00:43.970981629Z" level=info msg="received exit event container_id:\"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" id:\"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" pid:3307 exited_at:{seconds:1762354843 nanos:970757435}" Nov 5 15:00:43.971290 containerd[1563]: time="2025-11-05T15:00:43.971069907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" id:\"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" pid:3307 exited_at:{seconds:1762354843 nanos:970757435}" Nov 5 15:00:43.989730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41-rootfs.mount: Deactivated successfully. 
Nov 5 15:00:44.099143 containerd[1563]: time="2025-11-05T15:00:44.098997888Z" level=info msg="StopContainer for \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" returns successfully" Nov 5 15:00:44.102277 containerd[1563]: time="2025-11-05T15:00:44.102223967Z" level=info msg="StopPodSandbox for \"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\"" Nov 5 15:00:44.108842 containerd[1563]: time="2025-11-05T15:00:44.108540088Z" level=info msg="Container to stop \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:00:44.119180 systemd[1]: cri-containerd-1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04.scope: Deactivated successfully. Nov 5 15:00:44.120507 containerd[1563]: time="2025-11-05T15:00:44.120464868Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\" id:\"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\" pid:2894 exit_status:137 exited_at:{seconds:1762354844 nanos:119713287}" Nov 5 15:00:44.143990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04-rootfs.mount: Deactivated successfully. Nov 5 15:00:44.155453 containerd[1563]: time="2025-11-05T15:00:44.155389430Z" level=info msg="StopContainer for \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" returns successfully" Nov 5 15:00:44.156194 containerd[1563]: time="2025-11-05T15:00:44.156102252Z" level=info msg="StopPodSandbox for \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\"" Nov 5 15:00:44.156318 containerd[1563]: time="2025-11-05T15:00:44.156182850Z" level=info msg="Container to stop \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:00:44.156388 containerd[1563]: time="2025-11-05T15:00:44.156375205Z" level=info msg="Container to stop \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:00:44.156447 containerd[1563]: time="2025-11-05T15:00:44.156428004Z" level=info msg="Container to stop \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:00:44.156522 containerd[1563]: time="2025-11-05T15:00:44.156486122Z" level=info msg="Container to stop \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:00:44.156522 containerd[1563]: time="2025-11-05T15:00:44.156499642Z" level=info msg="Container to stop \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 5 15:00:44.162166 systemd[1]: cri-containerd-bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025.scope: Deactivated successfully. Nov 5 15:00:44.185053 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025-rootfs.mount: Deactivated successfully. 
Nov 5 15:00:44.277363 containerd[1563]: time="2025-11-05T15:00:44.277320484Z" level=info msg="shim disconnected" id=bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025 namespace=k8s.io Nov 5 15:00:44.277866 containerd[1563]: time="2025-11-05T15:00:44.277359363Z" level=warning msg="cleaning up after shim disconnected" id=bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025 namespace=k8s.io Nov 5 15:00:44.277866 containerd[1563]: time="2025-11-05T15:00:44.277392722Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 15:00:44.277866 containerd[1563]: time="2025-11-05T15:00:44.277593237Z" level=info msg="shim disconnected" id=1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04 namespace=k8s.io Nov 5 15:00:44.277866 containerd[1563]: time="2025-11-05T15:00:44.277610396Z" level=warning msg="cleaning up after shim disconnected" id=1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04 namespace=k8s.io Nov 5 15:00:44.277866 containerd[1563]: time="2025-11-05T15:00:44.277632796Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 5 15:00:44.296950 containerd[1563]: time="2025-11-05T15:00:44.296797674Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" id:\"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" pid:2902 exit_status:137 exited_at:{seconds:1762354844 nanos:167934555}" Nov 5 15:00:44.297305 containerd[1563]: time="2025-11-05T15:00:44.297261262Z" level=info msg="TearDown network for sandbox \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" successfully" Nov 5 15:00:44.297305 containerd[1563]: time="2025-11-05T15:00:44.297291741Z" level=info msg="StopPodSandbox for \"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" returns successfully" Nov 5 15:00:44.297430 containerd[1563]: time="2025-11-05T15:00:44.297408218Z" level=info msg="TearDown network for sandbox \"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\" successfully" Nov 5 15:00:44.297557 containerd[1563]: time="2025-11-05T15:00:44.297483776Z" level=info msg="StopPodSandbox for \"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\" returns successfully" Nov 5 15:00:44.298803 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025-shm.mount: Deactivated successfully. Nov 5 15:00:44.298912 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04-shm.mount: Deactivated successfully. 
Nov 5 15:00:44.302769 containerd[1563]: time="2025-11-05T15:00:44.302727445Z" level=info msg="received exit event sandbox_id:\"bfd43e874dc6dd1349ffd516c3df8e67cd74c2534a8bae01ae990a07ec0cf025\" exit_status:137 exited_at:{seconds:1762354844 nanos:167934555}" Nov 5 15:00:44.304231 containerd[1563]: time="2025-11-05T15:00:44.304197168Z" level=info msg="received exit event sandbox_id:\"1450e1ca7e690be4d19bc5c8ad678dc1e7f8439e52b6960f0b2a9e5b8dc0ce04\" exit_status:137 exited_at:{seconds:1762354844 nanos:119713287}" Nov 5 15:00:44.364345 kubelet[2692]: I1105 15:00:44.364290 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-bpf-maps\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364345 kubelet[2692]: I1105 15:00:44.364339 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-lib-modules\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364817 kubelet[2692]: I1105 15:00:44.364358 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-hostproc\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364817 kubelet[2692]: I1105 15:00:44.364389 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-config-path\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364817 kubelet[2692]: I1105 15:00:44.364410 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-cilium-config-path\") pod \"0ae31ee7-9542-4fac-9bc4-569bb9f3010f\" (UID: \"0ae31ee7-9542-4fac-9bc4-569bb9f3010f\") " Nov 5 15:00:44.364817 kubelet[2692]: I1105 15:00:44.364426 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-run\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364817 kubelet[2692]: I1105 15:00:44.364447 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlnt8\" (UniqueName: \"kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-kube-api-access-hlnt8\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364817 kubelet[2692]: I1105 15:00:44.364491 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db52dd67-f4bc-4c78-bcc3-795a86775434-clustermesh-secrets\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364950 kubelet[2692]: I1105 15:00:44.364511 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mpbd9\" (UniqueName: 
\"kubernetes.io/projected/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-kube-api-access-mpbd9\") pod \"0ae31ee7-9542-4fac-9bc4-569bb9f3010f\" (UID: \"0ae31ee7-9542-4fac-9bc4-569bb9f3010f\") " Nov 5 15:00:44.364950 kubelet[2692]: I1105 15:00:44.364526 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-net\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364950 kubelet[2692]: I1105 15:00:44.364541 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-etc-cni-netd\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364950 kubelet[2692]: I1105 15:00:44.364555 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-cgroup\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364950 kubelet[2692]: I1105 15:00:44.364571 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-kernel\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.364950 kubelet[2692]: I1105 15:00:44.364587 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cni-path\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.365073 kubelet[2692]: I1105 15:00:44.364607 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-hubble-tls\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.365073 kubelet[2692]: I1105 15:00:44.364623 2692 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-xtables-lock\") pod \"db52dd67-f4bc-4c78-bcc3-795a86775434\" (UID: \"db52dd67-f4bc-4c78-bcc3-795a86775434\") " Nov 5 15:00:44.367585 kubelet[2692]: I1105 15:00:44.367411 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.367585 kubelet[2692]: I1105 15:00:44.367476 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-hostproc" (OuterVolumeSpecName: "hostproc") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.367916 kubelet[2692]: I1105 15:00:44.367666 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.367916 kubelet[2692]: I1105 15:00:44.367736 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.367916 kubelet[2692]: I1105 15:00:44.367753 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.371098 kubelet[2692]: I1105 15:00:44.370816 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.371098 kubelet[2692]: I1105 15:00:44.370869 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.371098 kubelet[2692]: I1105 15:00:44.370891 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.371098 kubelet[2692]: I1105 15:00:44.370910 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cni-path" (OuterVolumeSpecName: "cni-path") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.375530 kubelet[2692]: I1105 15:00:44.375489 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 5 15:00:44.376349 kubelet[2692]: I1105 15:00:44.376315 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-kube-api-access-hlnt8" (OuterVolumeSpecName: "kube-api-access-hlnt8") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "kube-api-access-hlnt8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:00:44.377653 kubelet[2692]: I1105 15:00:44.377578 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0ae31ee7-9542-4fac-9bc4-569bb9f3010f" (UID: "0ae31ee7-9542-4fac-9bc4-569bb9f3010f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:00:44.377820 kubelet[2692]: I1105 15:00:44.377791 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:00:44.378266 kubelet[2692]: I1105 15:00:44.378220 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/db52dd67-f4bc-4c78-bcc3-795a86775434-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 5 15:00:44.378761 kubelet[2692]: I1105 15:00:44.378714 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "db52dd67-f4bc-4c78-bcc3-795a86775434" (UID: "db52dd67-f4bc-4c78-bcc3-795a86775434"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 5 15:00:44.379157 kubelet[2692]: I1105 15:00:44.379131 2692 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-kube-api-access-mpbd9" (OuterVolumeSpecName: "kube-api-access-mpbd9") pod "0ae31ee7-9542-4fac-9bc4-569bb9f3010f" (UID: "0ae31ee7-9542-4fac-9bc4-569bb9f3010f"). InnerVolumeSpecName "kube-api-access-mpbd9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 5 15:00:44.413786 kubelet[2692]: E1105 15:00:44.413654 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:44.465812 kubelet[2692]: I1105 15:00:44.465758 2692 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-hubble-tls\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.465812 kubelet[2692]: I1105 15:00:44.465798 2692 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-xtables-lock\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.465812 kubelet[2692]: I1105 15:00:44.465809 2692 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cni-path\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.465812 kubelet[2692]: I1105 15:00:44.465817 2692 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-bpf-maps\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.465812 kubelet[2692]: I1105 15:00:44.465825 2692 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-lib-modules\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.465812 kubelet[2692]: I1105 15:00:44.465833 2692 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-hostproc\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: I1105 15:00:44.465841 2692 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: I1105 15:00:44.465859 2692 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-run\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: I1105 15:00:44.465868 2692 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: I1105 15:00:44.465876 2692 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db52dd67-f4bc-4c78-bcc3-795a86775434-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: I1105 15:00:44.465885 2692 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mpbd9\" (UniqueName: \"kubernetes.io/projected/0ae31ee7-9542-4fac-9bc4-569bb9f3010f-kube-api-access-mpbd9\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: I1105 15:00:44.465896 2692 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: 
I1105 15:00:44.465905 2692 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466071 kubelet[2692]: I1105 15:00:44.465913 2692 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hlnt8\" (UniqueName: \"kubernetes.io/projected/db52dd67-f4bc-4c78-bcc3-795a86775434-kube-api-access-hlnt8\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466249 kubelet[2692]: I1105 15:00:44.465922 2692 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.466249 kubelet[2692]: I1105 15:00:44.465935 2692 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db52dd67-f4bc-4c78-bcc3-795a86775434-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Nov 5 15:00:44.650578 kubelet[2692]: I1105 15:00:44.650438 2692 scope.go:117] "RemoveContainer" containerID="c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917" Nov 5 15:00:44.652861 containerd[1563]: time="2025-11-05T15:00:44.652820480Z" level=info msg="RemoveContainer for \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\"" Nov 5 15:00:44.653474 systemd[1]: Removed slice kubepods-besteffort-pod0ae31ee7_9542_4fac_9bc4_569bb9f3010f.slice - libcontainer container kubepods-besteffort-pod0ae31ee7_9542_4fac_9bc4_569bb9f3010f.slice. Nov 5 15:00:44.660648 systemd[1]: Removed slice kubepods-burstable-poddb52dd67_f4bc_4c78_bcc3_795a86775434.slice - libcontainer container kubepods-burstable-poddb52dd67_f4bc_4c78_bcc3_795a86775434.slice. Nov 5 15:00:44.660849 systemd[1]: kubepods-burstable-poddb52dd67_f4bc_4c78_bcc3_795a86775434.slice: Consumed 6.475s CPU time, 125.2M memory peak, 6.1M read from disk, 12.9M written to disk. 
Nov 5 15:00:44.670354 containerd[1563]: time="2025-11-05T15:00:44.670238042Z" level=info msg="RemoveContainer for \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" returns successfully" Nov 5 15:00:44.672361 kubelet[2692]: I1105 15:00:44.672331 2692 scope.go:117] "RemoveContainer" containerID="c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917" Nov 5 15:00:44.673247 containerd[1563]: time="2025-11-05T15:00:44.673198648Z" level=error msg="ContainerStatus for \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\": not found" Nov 5 15:00:44.675278 kubelet[2692]: E1105 15:00:44.675170 2692 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\": not found" containerID="c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917" Nov 5 15:00:44.682934 kubelet[2692]: I1105 15:00:44.682799 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917"} err="failed to get container status \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5b515c82a3587861c83ca28a86edf089210f7b80c492a931a02a9364ceed917\": not found" Nov 5 15:00:44.683757 kubelet[2692]: I1105 15:00:44.683671 2692 scope.go:117] "RemoveContainer" containerID="1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41" Nov 5 15:00:44.686298 containerd[1563]: time="2025-11-05T15:00:44.686263279Z" level=info msg="RemoveContainer for \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\"" Nov 5 15:00:44.692348 containerd[1563]: time="2025-11-05T15:00:44.692314247Z" level=info msg="RemoveContainer for \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" returns successfully" Nov 5 15:00:44.692585 kubelet[2692]: I1105 15:00:44.692552 2692 scope.go:117] "RemoveContainer" containerID="d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d" Nov 5 15:00:44.694375 containerd[1563]: time="2025-11-05T15:00:44.694345116Z" level=info msg="RemoveContainer for \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\"" Nov 5 15:00:44.703701 containerd[1563]: time="2025-11-05T15:00:44.703605283Z" level=info msg="RemoveContainer for \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" returns successfully" Nov 5 15:00:44.703916 kubelet[2692]: I1105 15:00:44.703874 2692 scope.go:117] "RemoveContainer" containerID="e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1" Nov 5 15:00:44.706052 containerd[1563]: time="2025-11-05T15:00:44.706026582Z" level=info msg="RemoveContainer for \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\"" Nov 5 15:00:44.709633 containerd[1563]: time="2025-11-05T15:00:44.709592972Z" level=info msg="RemoveContainer for \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" returns successfully" Nov 5 15:00:44.709830 kubelet[2692]: I1105 15:00:44.709808 2692 scope.go:117] "RemoveContainer" containerID="47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05" Nov 5 15:00:44.711622 containerd[1563]: time="2025-11-05T15:00:44.711593682Z" 
level=info msg="RemoveContainer for \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\"" Nov 5 15:00:44.720132 containerd[1563]: time="2025-11-05T15:00:44.720094548Z" level=info msg="RemoveContainer for \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" returns successfully" Nov 5 15:00:44.720400 kubelet[2692]: I1105 15:00:44.720336 2692 scope.go:117] "RemoveContainer" containerID="14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4" Nov 5 15:00:44.722094 containerd[1563]: time="2025-11-05T15:00:44.722063739Z" level=info msg="RemoveContainer for \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\"" Nov 5 15:00:44.725323 containerd[1563]: time="2025-11-05T15:00:44.725276698Z" level=info msg="RemoveContainer for \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" returns successfully" Nov 5 15:00:44.728849 kubelet[2692]: I1105 15:00:44.728773 2692 scope.go:117] "RemoveContainer" containerID="1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41" Nov 5 15:00:44.729304 containerd[1563]: time="2025-11-05T15:00:44.729252198Z" level=error msg="ContainerStatus for \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\": not found" Nov 5 15:00:44.729525 kubelet[2692]: E1105 15:00:44.729467 2692 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\": not found" containerID="1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41" Nov 5 15:00:44.729525 kubelet[2692]: I1105 15:00:44.729500 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41"} err="failed to get container status \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f59fc0b4ba686361c08e2e91d0e6d803c1bb0d31edd76ea5a856a651b9a9b41\": not found" Nov 5 15:00:44.729698 kubelet[2692]: I1105 15:00:44.729635 2692 scope.go:117] "RemoveContainer" containerID="d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d" Nov 5 15:00:44.729903 containerd[1563]: time="2025-11-05T15:00:44.729874502Z" level=error msg="ContainerStatus for \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\": not found" Nov 5 15:00:44.730033 kubelet[2692]: E1105 15:00:44.730013 2692 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\": not found" containerID="d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d" Nov 5 15:00:44.730178 kubelet[2692]: I1105 15:00:44.730115 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d"} err="failed to get container status \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"d3c53544ad9c619df438a160918be8a5d131f601a372a45ae58750c074afb86d\": not found" Nov 5 15:00:44.730178 kubelet[2692]: I1105 15:00:44.730137 2692 scope.go:117] "RemoveContainer" containerID="e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1" Nov 5 15:00:44.730516 containerd[1563]: time="2025-11-05T15:00:44.730485847Z" level=error msg="ContainerStatus for \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\": not found" Nov 5 15:00:44.730755 kubelet[2692]: E1105 15:00:44.730650 2692 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\": not found" containerID="e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1" Nov 5 15:00:44.730755 kubelet[2692]: I1105 15:00:44.730711 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1"} err="failed to get container status \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"e2d78512f129ba64e55947f08cbe5dd378e25644bfa5f8aecca880ff100084d1\": not found" Nov 5 15:00:44.730755 kubelet[2692]: I1105 15:00:44.730729 2692 scope.go:117] "RemoveContainer" containerID="47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05" Nov 5 15:00:44.731074 containerd[1563]: time="2025-11-05T15:00:44.731041313Z" level=error msg="ContainerStatus for \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\": not found" Nov 5 15:00:44.731188 kubelet[2692]: E1105 15:00:44.731165 2692 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\": not found" containerID="47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05" Nov 5 15:00:44.731228 kubelet[2692]: I1105 15:00:44.731192 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05"} err="failed to get container status \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\": rpc error: code = NotFound desc = an error occurred when try to find container \"47569d62ded7f3d018bfa1c0a33cb06157cd3cd7cfd64fb314bf3999ce9e9e05\": not found" Nov 5 15:00:44.731228 kubelet[2692]: I1105 15:00:44.731212 2692 scope.go:117] "RemoveContainer" containerID="14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4" Nov 5 15:00:44.731434 containerd[1563]: time="2025-11-05T15:00:44.731403184Z" level=error msg="ContainerStatus for \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\": not found" Nov 5 15:00:44.731616 kubelet[2692]: E1105 15:00:44.731598 2692 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\": not found" containerID="14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4" Nov 5 15:00:44.731735 kubelet[2692]: I1105 15:00:44.731712 2692 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4"} err="failed to get container status \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\": rpc error: code = NotFound desc = an error occurred when try to find container \"14d18e1026f006ff29c8c4a3bb1f0250bc64df625421d107380db47ced6690c4\": not found" Nov 5 15:00:44.946994 systemd[1]: var-lib-kubelet-pods-db52dd67\x2df4bc\x2d4c78\x2dbcc3\x2d795a86775434-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlnt8.mount: Deactivated successfully. Nov 5 15:00:44.947106 systemd[1]: var-lib-kubelet-pods-0ae31ee7\x2d9542\x2d4fac\x2d9bc4\x2d569bb9f3010f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmpbd9.mount: Deactivated successfully. Nov 5 15:00:44.947165 systemd[1]: var-lib-kubelet-pods-db52dd67\x2df4bc\x2d4c78\x2dbcc3\x2d795a86775434-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Nov 5 15:00:44.947225 systemd[1]: var-lib-kubelet-pods-db52dd67\x2df4bc\x2d4c78\x2dbcc3\x2d795a86775434-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Nov 5 15:00:45.416715 kubelet[2692]: I1105 15:00:45.416454 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0ae31ee7-9542-4fac-9bc4-569bb9f3010f" path="/var/lib/kubelet/pods/0ae31ee7-9542-4fac-9bc4-569bb9f3010f/volumes" Nov 5 15:00:45.417506 kubelet[2692]: I1105 15:00:45.417316 2692 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db52dd67-f4bc-4c78-bcc3-795a86775434" path="/var/lib/kubelet/pods/db52dd67-f4bc-4c78-bcc3-795a86775434/volumes" Nov 5 15:00:45.824592 sshd[4365]: Connection closed by 10.0.0.1 port 45312 Nov 5 15:00:45.825141 sshd-session[4362]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:45.839502 systemd[1]: sshd@25-10.0.0.22:22-10.0.0.1:45312.service: Deactivated successfully. Nov 5 15:00:45.841467 systemd[1]: session-26.scope: Deactivated successfully. Nov 5 15:00:45.842600 systemd[1]: session-26.scope: Consumed 1.022s CPU time, 26.2M memory peak. Nov 5 15:00:45.843868 systemd-logind[1544]: Session 26 logged out. Waiting for processes to exit. Nov 5 15:00:45.846488 systemd[1]: Started sshd@26-10.0.0.22:22-10.0.0.1:45320.service - OpenSSH per-connection server daemon (10.0.0.1:45320). Nov 5 15:00:45.848506 systemd-logind[1544]: Removed session 26. Nov 5 15:00:45.908316 sshd[4517]: Accepted publickey for core from 10.0.0.1 port 45320 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:45.909551 sshd-session[4517]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:45.915519 systemd-logind[1544]: New session 27 of user core. Nov 5 15:00:45.927901 systemd[1]: Started session-27.scope - Session 27 of User core. Nov 5 15:00:47.134825 sshd[4520]: Connection closed by 10.0.0.1 port 45320 Nov 5 15:00:47.136369 sshd-session[4517]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:47.146481 systemd[1]: sshd@26-10.0.0.22:22-10.0.0.1:45320.service: Deactivated successfully. 
Nov 5 15:00:47.148834 systemd[1]: session-27.scope: Deactivated successfully. Nov 5 15:00:47.149216 systemd[1]: session-27.scope: Consumed 1.086s CPU time, 26.3M memory peak. Nov 5 15:00:47.150993 systemd-logind[1544]: Session 27 logged out. Waiting for processes to exit. Nov 5 15:00:47.155814 kubelet[2692]: I1105 15:00:47.155733 2692 memory_manager.go:355] "RemoveStaleState removing state" podUID="db52dd67-f4bc-4c78-bcc3-795a86775434" containerName="cilium-agent" Nov 5 15:00:47.155814 kubelet[2692]: I1105 15:00:47.155768 2692 memory_manager.go:355] "RemoveStaleState removing state" podUID="0ae31ee7-9542-4fac-9bc4-569bb9f3010f" containerName="cilium-operator" Nov 5 15:00:47.158452 systemd[1]: Started sshd@27-10.0.0.22:22-10.0.0.1:45328.service - OpenSSH per-connection server daemon (10.0.0.1:45328). Nov 5 15:00:47.162209 systemd-logind[1544]: Removed session 27. Nov 5 15:00:47.179296 systemd[1]: Created slice kubepods-burstable-poded3c8f8d_332c_4b78_9ff8_372013e88538.slice - libcontainer container kubepods-burstable-poded3c8f8d_332c_4b78_9ff8_372013e88538.slice. Nov 5 15:00:47.232489 sshd[4532]: Accepted publickey for core from 10.0.0.1 port 45328 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:47.233834 sshd-session[4532]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:47.239743 systemd-logind[1544]: New session 28 of user core. Nov 5 15:00:47.256983 systemd[1]: Started session-28.scope - Session 28 of User core. Nov 5 15:00:47.282374 kubelet[2692]: I1105 15:00:47.282315 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ed3c8f8d-332c-4b78-9ff8-372013e88538-clustermesh-secrets\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282374 kubelet[2692]: I1105 15:00:47.282369 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-host-proc-sys-kernel\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282526 kubelet[2692]: I1105 15:00:47.282392 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-cni-path\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282526 kubelet[2692]: I1105 15:00:47.282412 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-etc-cni-netd\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282526 kubelet[2692]: I1105 15:00:47.282430 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ed3c8f8d-332c-4b78-9ff8-372013e88538-cilium-ipsec-secrets\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282526 kubelet[2692]: I1105 15:00:47.282447 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hostproc\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-hostproc\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282526 kubelet[2692]: I1105 15:00:47.282463 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-lib-modules\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282526 kubelet[2692]: I1105 15:00:47.282483 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwk2\" (UniqueName: \"kubernetes.io/projected/ed3c8f8d-332c-4b78-9ff8-372013e88538-kube-api-access-2kwk2\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282653 kubelet[2692]: I1105 15:00:47.282522 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-xtables-lock\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282653 kubelet[2692]: I1105 15:00:47.282543 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ed3c8f8d-332c-4b78-9ff8-372013e88538-cilium-config-path\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282653 kubelet[2692]: I1105 15:00:47.282617 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-bpf-maps\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282772 kubelet[2692]: I1105 15:00:47.282658 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-host-proc-sys-net\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282772 kubelet[2692]: I1105 15:00:47.282754 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-cilium-cgroup\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282818 kubelet[2692]: I1105 15:00:47.282786 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ed3c8f8d-332c-4b78-9ff8-372013e88538-hubble-tls\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.282818 kubelet[2692]: I1105 15:00:47.282810 2692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ed3c8f8d-332c-4b78-9ff8-372013e88538-cilium-run\") pod \"cilium-r4hb6\" (UID: \"ed3c8f8d-332c-4b78-9ff8-372013e88538\") " 
pod="kube-system/cilium-r4hb6" Nov 5 15:00:47.307681 sshd[4535]: Connection closed by 10.0.0.1 port 45328 Nov 5 15:00:47.308126 sshd-session[4532]: pam_unix(sshd:session): session closed for user core Nov 5 15:00:47.321045 systemd[1]: sshd@27-10.0.0.22:22-10.0.0.1:45328.service: Deactivated successfully. Nov 5 15:00:47.324119 systemd[1]: session-28.scope: Deactivated successfully. Nov 5 15:00:47.326478 systemd-logind[1544]: Session 28 logged out. Waiting for processes to exit. Nov 5 15:00:47.329478 systemd[1]: Started sshd@28-10.0.0.22:22-10.0.0.1:45342.service - OpenSSH per-connection server daemon (10.0.0.1:45342). Nov 5 15:00:47.330173 systemd-logind[1544]: Removed session 28. Nov 5 15:00:47.395779 sshd[4542]: Accepted publickey for core from 10.0.0.1 port 45342 ssh2: RSA SHA256:nM3EkzhYnY1k7HKfBVIgLIVO2VgoKZbQ4dF/3C6QndI Nov 5 15:00:47.399045 sshd-session[4542]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 5 15:00:47.409077 systemd-logind[1544]: New session 29 of user core. Nov 5 15:00:47.414932 systemd[1]: Started session-29.scope - Session 29 of User core. Nov 5 15:00:47.464245 kubelet[2692]: E1105 15:00:47.464192 2692 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Nov 5 15:00:47.483626 kubelet[2692]: E1105 15:00:47.483576 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:47.484548 containerd[1563]: time="2025-11-05T15:00:47.484496595Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4hb6,Uid:ed3c8f8d-332c-4b78-9ff8-372013e88538,Namespace:kube-system,Attempt:0,}" Nov 5 15:00:47.510270 containerd[1563]: time="2025-11-05T15:00:47.510200689Z" level=info msg="connecting to shim 4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d" address="unix:///run/containerd/s/381bc6fad116f30b7c7111e38acd279701c6df91f926d1a5e575a97832f79063" namespace=k8s.io protocol=ttrpc version=3 Nov 5 15:00:47.538024 systemd[1]: Started cri-containerd-4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d.scope - libcontainer container 4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d. 
Nov 5 15:00:47.566077 containerd[1563]: time="2025-11-05T15:00:47.566037063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r4hb6,Uid:ed3c8f8d-332c-4b78-9ff8-372013e88538,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\"" Nov 5 15:00:47.567316 kubelet[2692]: E1105 15:00:47.567292 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:47.571188 containerd[1563]: time="2025-11-05T15:00:47.571121275Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 5 15:00:47.579472 containerd[1563]: time="2025-11-05T15:00:47.579423339Z" level=info msg="Container 051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:00:47.585131 containerd[1563]: time="2025-11-05T15:00:47.585091658Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56\"" Nov 5 15:00:47.585619 containerd[1563]: time="2025-11-05T15:00:47.585586528Z" level=info msg="StartContainer for \"051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56\"" Nov 5 15:00:47.586727 containerd[1563]: time="2025-11-05T15:00:47.586696384Z" level=info msg="connecting to shim 051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56" address="unix:///run/containerd/s/381bc6fad116f30b7c7111e38acd279701c6df91f926d1a5e575a97832f79063" protocol=ttrpc version=3 Nov 5 15:00:47.610219 systemd[1]: Started cri-containerd-051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56.scope - libcontainer container 051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56. Nov 5 15:00:47.641037 containerd[1563]: time="2025-11-05T15:00:47.641000711Z" level=info msg="StartContainer for \"051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56\" returns successfully" Nov 5 15:00:47.649429 systemd[1]: cri-containerd-051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56.scope: Deactivated successfully. 
Nov 5 15:00:47.652217 containerd[1563]: time="2025-11-05T15:00:47.652185194Z" level=info msg="TaskExit event in podsandbox handler container_id:\"051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56\" id:\"051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56\" pid:4617 exited_at:{seconds:1762354847 nanos:651839561}" Nov 5 15:00:47.652324 containerd[1563]: time="2025-11-05T15:00:47.652275992Z" level=info msg="received exit event container_id:\"051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56\" id:\"051a58001161d15422b1cfd910e21f94567125d52098ae896cd9555c0d43ba56\" pid:4617 exited_at:{seconds:1762354847 nanos:651839561}" Nov 5 15:00:47.666906 kubelet[2692]: E1105 15:00:47.666877 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:48.670123 kubelet[2692]: E1105 15:00:48.670037 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:48.672877 containerd[1563]: time="2025-11-05T15:00:48.672806698Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 5 15:00:48.689356 containerd[1563]: time="2025-11-05T15:00:48.689132092Z" level=info msg="Container 9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:00:48.694731 containerd[1563]: time="2025-11-05T15:00:48.694615302Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590\"" Nov 5 15:00:48.695188 containerd[1563]: time="2025-11-05T15:00:48.695165011Z" level=info msg="StartContainer for \"9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590\"" Nov 5 15:00:48.696364 containerd[1563]: time="2025-11-05T15:00:48.696269589Z" level=info msg="connecting to shim 9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590" address="unix:///run/containerd/s/381bc6fad116f30b7c7111e38acd279701c6df91f926d1a5e575a97832f79063" protocol=ttrpc version=3 Nov 5 15:00:48.715832 systemd[1]: Started cri-containerd-9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590.scope - libcontainer container 9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590. Nov 5 15:00:48.740778 containerd[1563]: time="2025-11-05T15:00:48.740722019Z" level=info msg="StartContainer for \"9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590\" returns successfully" Nov 5 15:00:48.745005 systemd[1]: cri-containerd-9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590.scope: Deactivated successfully. 
Nov 5 15:00:48.745862 containerd[1563]: time="2025-11-05T15:00:48.745524883Z" level=info msg="received exit event container_id:\"9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590\" id:\"9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590\" pid:4662 exited_at:{seconds:1762354848 nanos:745276208}" Nov 5 15:00:48.746199 containerd[1563]: time="2025-11-05T15:00:48.746094992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590\" id:\"9944c36ab9583444cef6ebae8c416b582e18e8bcf4d68953027f2fbf3193c590\" pid:4662 exited_at:{seconds:1762354848 nanos:745276208}" Nov 5 15:00:49.675868 kubelet[2692]: E1105 15:00:49.674546 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:49.679259 containerd[1563]: time="2025-11-05T15:00:49.679144843Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 5 15:00:49.829733 containerd[1563]: time="2025-11-05T15:00:49.829607571Z" level=info msg="Container cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:00:49.834148 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954005684.mount: Deactivated successfully. Nov 5 15:00:49.849089 containerd[1563]: time="2025-11-05T15:00:49.849024165Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e\"" Nov 5 15:00:49.850064 containerd[1563]: time="2025-11-05T15:00:49.850007827Z" level=info msg="StartContainer for \"cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e\"" Nov 5 15:00:49.852768 containerd[1563]: time="2025-11-05T15:00:49.852732375Z" level=info msg="connecting to shim cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e" address="unix:///run/containerd/s/381bc6fad116f30b7c7111e38acd279701c6df91f926d1a5e575a97832f79063" protocol=ttrpc version=3 Nov 5 15:00:49.873939 systemd[1]: Started cri-containerd-cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e.scope - libcontainer container cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e. Nov 5 15:00:49.911992 systemd[1]: cri-containerd-cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e.scope: Deactivated successfully. 
Nov 5 15:00:49.912346 kubelet[2692]: I1105 15:00:49.910921 2692 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-05T15:00:49Z","lastTransitionTime":"2025-11-05T15:00:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Nov 5 15:00:49.913870 containerd[1563]: time="2025-11-05T15:00:49.913829665Z" level=info msg="received exit event container_id:\"cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e\" id:\"cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e\" pid:4706 exited_at:{seconds:1762354849 nanos:913368394}" Nov 5 15:00:49.914373 containerd[1563]: time="2025-11-05T15:00:49.914324256Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e\" id:\"cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e\" pid:4706 exited_at:{seconds:1762354849 nanos:913368394}" Nov 5 15:00:49.933826 containerd[1563]: time="2025-11-05T15:00:49.933593613Z" level=info msg="StartContainer for \"cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e\" returns successfully" Nov 5 15:00:49.941900 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cc0f40b41480f3523c192d56ca89408bf4526068b34c8772e26825a376f7792e-rootfs.mount: Deactivated successfully. Nov 5 15:00:50.681485 kubelet[2692]: E1105 15:00:50.680404 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:50.685722 containerd[1563]: time="2025-11-05T15:00:50.684318334Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 5 15:00:50.694955 containerd[1563]: time="2025-11-05T15:00:50.694915266Z" level=info msg="Container f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:00:50.710965 containerd[1563]: time="2025-11-05T15:00:50.710920984Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883\"" Nov 5 15:00:50.711769 containerd[1563]: time="2025-11-05T15:00:50.711724849Z" level=info msg="StartContainer for \"f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883\"" Nov 5 15:00:50.712708 containerd[1563]: time="2025-11-05T15:00:50.712651873Z" level=info msg="connecting to shim f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883" address="unix:///run/containerd/s/381bc6fad116f30b7c7111e38acd279701c6df91f926d1a5e575a97832f79063" protocol=ttrpc version=3 Nov 5 15:00:50.734915 systemd[1]: Started cri-containerd-f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883.scope - libcontainer container f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883. Nov 5 15:00:50.759831 systemd[1]: cri-containerd-f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883.scope: Deactivated successfully. 
Nov 5 15:00:50.762334 containerd[1563]: time="2025-11-05T15:00:50.762278596Z" level=info msg="received exit event container_id:\"f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883\" id:\"f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883\" pid:4745 exited_at:{seconds:1762354850 nanos:761564048}" Nov 5 15:00:50.764193 containerd[1563]: time="2025-11-05T15:00:50.763183020Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883\" id:\"f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883\" pid:4745 exited_at:{seconds:1762354850 nanos:761564048}" Nov 5 15:00:50.772661 containerd[1563]: time="2025-11-05T15:00:50.772617213Z" level=info msg="StartContainer for \"f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883\" returns successfully" Nov 5 15:00:50.787151 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8a82b38ba8ba9853aa4cf5e3fc87ebc963db7cc5746446efeb7cef9d1299883-rootfs.mount: Deactivated successfully. Nov 5 15:00:51.688895 kubelet[2692]: E1105 15:00:51.688843 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 5 15:00:51.691711 containerd[1563]: time="2025-11-05T15:00:51.691583535Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 5 15:00:51.714314 containerd[1563]: time="2025-11-05T15:00:51.713994444Z" level=info msg="Container e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60: CDI devices from CRI Config.CDIDevices: []" Nov 5 15:00:51.715965 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4186400261.mount: Deactivated successfully. Nov 5 15:00:51.727709 containerd[1563]: time="2025-11-05T15:00:51.727399462Z" level=info msg="CreateContainer within sandbox \"4ac72449d40d4d80a00a5c3af008eff7c762b8cd1f97bb4a85571744a4b4d72d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\"" Nov 5 15:00:51.731711 containerd[1563]: time="2025-11-05T15:00:51.729351070Z" level=info msg="StartContainer for \"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\"" Nov 5 15:00:51.732517 containerd[1563]: time="2025-11-05T15:00:51.732481338Z" level=info msg="connecting to shim e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60" address="unix:///run/containerd/s/381bc6fad116f30b7c7111e38acd279701c6df91f926d1a5e575a97832f79063" protocol=ttrpc version=3 Nov 5 15:00:51.768860 systemd[1]: Started cri-containerd-e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60.scope - libcontainer container e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60. 
Nov 5 15:00:51.796230 containerd[1563]: time="2025-11-05T15:00:51.796186042Z" level=info msg="StartContainer for \"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\" returns successfully"
Nov 5 15:00:51.847884 containerd[1563]: time="2025-11-05T15:00:51.847835707Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\" id:\"01d74415969051d434892302752537680b4816beb7d8b0b167a22cdd051fc345\" pid:4814 exited_at:{seconds:1762354851 nanos:847468553}"
Nov 5 15:00:52.069727 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Nov 5 15:00:52.698378 kubelet[2692]: E1105 15:00:52.698288 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:00:52.717938 kubelet[2692]: I1105 15:00:52.717826 2692 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r4hb6" podStartSLOduration=5.717806948 podStartE2EDuration="5.717806948s" podCreationTimestamp="2025-11-05 15:00:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-05 15:00:52.716846563 +0000 UTC m=+95.399190494" watchObservedRunningTime="2025-11-05 15:00:52.717806948 +0000 UTC m=+95.400150839"
Nov 5 15:00:53.702782 kubelet[2692]: E1105 15:00:53.702735 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:00:53.804930 containerd[1563]: time="2025-11-05T15:00:53.804884310Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\" id:\"fb91403188efb7c5d267baaae5da977b6d01c9c211c14ba9324869030a53c000\" pid:4980 exit_status:1 exited_at:{seconds:1762354853 nanos:804555035}"
Nov 5 15:00:55.055836 systemd-networkd[1476]: lxc_health: Link UP
Nov 5 15:00:55.056065 systemd-networkd[1476]: lxc_health: Gained carrier
Nov 5 15:00:55.486275 kubelet[2692]: E1105 15:00:55.485905 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:00:55.704019 kubelet[2692]: E1105 15:00:55.703947 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:00:55.939098 containerd[1563]: time="2025-11-05T15:00:55.939052004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\" id:\"19b5584d994090d5a956fdebe4538ab061fa68680fc696a7d24e7470237c3912\" pid:5351 exited_at:{seconds:1762354855 nanos:938656649}"
Nov 5 15:00:56.203913 systemd-networkd[1476]: lxc_health: Gained IPv6LL
Nov 5 15:00:56.705794 kubelet[2692]: E1105 15:00:56.705676 2692 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 5 15:00:58.053436 containerd[1563]: time="2025-11-05T15:00:58.053385038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\" id:\"8a510ff6b63aef71863a8bf3c2b49c847099f1ab6c9628da1bc6209d4e764d50\" pid:5385 exited_at:{seconds:1762354858 nanos:53018161}"
Nov 5 15:01:00.161301 containerd[1563]: time="2025-11-05T15:01:00.161248261Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e8db19b4dbbfbd23575be7c97ad6f5e4173e5c766f9023a11a7be1f646c31d60\" id:\"b92c8ff465b3b809c3c3cb333c863b276fd24da876b5b7352ac53122c81a56a8\" pid:5416 exited_at:{seconds:1762354860 nanos:160661026}"
Nov 5 15:01:00.165555 sshd[4552]: Connection closed by 10.0.0.1 port 45342
Nov 5 15:01:00.165927 sshd-session[4542]: pam_unix(sshd:session): session closed for user core
Nov 5 15:01:00.169720 systemd[1]: sshd@28-10.0.0.22:22-10.0.0.1:45342.service: Deactivated successfully.
Nov 5 15:01:00.171450 systemd[1]: session-29.scope: Deactivated successfully.
Nov 5 15:01:00.172275 systemd-logind[1544]: Session 29 logged out. Waiting for processes to exit.
Nov 5 15:01:00.173506 systemd-logind[1544]: Removed session 29.