Oct 27 23:34:14.883463 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 27 23:34:14.883484 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Mon Oct 27 22:11:36 -00 2025 Oct 27 23:34:14.883494 kernel: KASLR enabled Oct 27 23:34:14.883499 kernel: efi: EFI v2.7 by EDK II Oct 27 23:34:14.883505 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218 Oct 27 23:34:14.883510 kernel: random: crng init done Oct 27 23:34:14.883517 kernel: secureboot: Secure boot disabled Oct 27 23:34:14.883523 kernel: ACPI: Early table checksum verification disabled Oct 27 23:34:14.883529 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS ) Oct 27 23:34:14.883536 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 27 23:34:14.883542 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883548 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883576 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883585 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883592 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883600 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883607 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883613 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883619 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 27 23:34:14.883625 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 27 23:34:14.883631 kernel: NUMA: Failed to initialise from firmware Oct 27 23:34:14.883638 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 27 23:34:14.883644 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff] Oct 27 23:34:14.883649 kernel: Zone ranges: Oct 27 23:34:14.883655 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 27 23:34:14.883663 kernel: DMA32 empty Oct 27 23:34:14.883669 kernel: Normal empty Oct 27 23:34:14.883675 kernel: Movable zone start for each node Oct 27 23:34:14.883681 kernel: Early memory node ranges Oct 27 23:34:14.883687 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff] Oct 27 23:34:14.883693 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff] Oct 27 23:34:14.883699 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff] Oct 27 23:34:14.883705 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff] Oct 27 23:34:14.883711 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff] Oct 27 23:34:14.883717 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Oct 27 23:34:14.883722 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Oct 27 23:34:14.883729 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Oct 27 23:34:14.883736 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Oct 27 23:34:14.883742 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 27 23:34:14.883748 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 27 23:34:14.883757 kernel: psci: 
probing for conduit method from ACPI. Oct 27 23:34:14.883763 kernel: psci: PSCIv1.1 detected in firmware. Oct 27 23:34:14.883769 kernel: psci: Using standard PSCI v0.2 function IDs Oct 27 23:34:14.883777 kernel: psci: Trusted OS migration not required Oct 27 23:34:14.883783 kernel: psci: SMC Calling Convention v1.1 Oct 27 23:34:14.883790 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 27 23:34:14.883796 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976 Oct 27 23:34:14.883803 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096 Oct 27 23:34:14.883810 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 27 23:34:14.883816 kernel: Detected PIPT I-cache on CPU0 Oct 27 23:34:14.883822 kernel: CPU features: detected: GIC system register CPU interface Oct 27 23:34:14.883829 kernel: CPU features: detected: Hardware dirty bit management Oct 27 23:34:14.883835 kernel: CPU features: detected: Spectre-v4 Oct 27 23:34:14.883843 kernel: CPU features: detected: Spectre-BHB Oct 27 23:34:14.883849 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 27 23:34:14.883856 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 27 23:34:14.883862 kernel: CPU features: detected: ARM erratum 1418040 Oct 27 23:34:14.883868 kernel: CPU features: detected: SSBS not fully self-synchronizing Oct 27 23:34:14.883875 kernel: alternatives: applying boot alternatives Oct 27 23:34:14.883882 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e7e3bb3d45cdf83dc44aaf22327a51afe76152af638616b83c00ab1a45937f6d Oct 27 23:34:14.883889 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 27 23:34:14.883895 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 27 23:34:14.883902 kernel: Fallback order for Node 0: 0 Oct 27 23:34:14.883908 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 27 23:34:14.883916 kernel: Policy zone: DMA Oct 27 23:34:14.883922 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 27 23:34:14.883929 kernel: software IO TLB: area num 4. Oct 27 23:34:14.883935 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB) Oct 27 23:34:14.883942 kernel: Memory: 2387408K/2572288K available (10368K kernel code, 2180K rwdata, 8104K rodata, 38400K init, 897K bss, 184880K reserved, 0K cma-reserved) Oct 27 23:34:14.883948 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 27 23:34:14.883955 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 27 23:34:14.883962 kernel: rcu: RCU event tracing is enabled. Oct 27 23:34:14.883968 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 27 23:34:14.883975 kernel: Trampoline variant of Tasks RCU enabled. Oct 27 23:34:14.883981 kernel: Tracing variant of Tasks RCU enabled. Oct 27 23:34:14.883988 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 27 23:34:14.883995 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 27 23:34:14.884002 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 27 23:34:14.884008 kernel: GICv3: 256 SPIs implemented Oct 27 23:34:14.884014 kernel: GICv3: 0 Extended SPIs implemented Oct 27 23:34:14.884021 kernel: Root IRQ handler: gic_handle_irq Oct 27 23:34:14.884027 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Oct 27 23:34:14.884033 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 27 23:34:14.884040 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 27 23:34:14.884046 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1) Oct 27 23:34:14.884053 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1) Oct 27 23:34:14.884059 kernel: GICv3: using LPI property table @0x00000000400f0000 Oct 27 23:34:14.884067 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000 Oct 27 23:34:14.884074 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Oct 27 23:34:14.884080 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:34:14.884086 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 27 23:34:14.884093 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 27 23:34:14.884100 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 27 23:34:14.884106 kernel: arm-pv: using stolen time PV Oct 27 23:34:14.884113 kernel: Console: colour dummy device 80x25 Oct 27 23:34:14.884119 kernel: ACPI: Core revision 20230628 Oct 27 23:34:14.884126 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 27 23:34:14.884133 kernel: pid_max: default: 32768 minimum: 301 Oct 27 23:34:14.884141 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Oct 27 23:34:14.884147 kernel: landlock: Up and running. Oct 27 23:34:14.884154 kernel: SELinux: Initializing. Oct 27 23:34:14.884160 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 23:34:14.884167 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 27 23:34:14.884174 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 23:34:14.884180 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Oct 27 23:34:14.884187 kernel: rcu: Hierarchical SRCU implementation. Oct 27 23:34:14.884194 kernel: rcu: Max phase no-delay instances is 400. Oct 27 23:34:14.884202 kernel: Platform MSI: ITS@0x8080000 domain created Oct 27 23:34:14.884208 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 27 23:34:14.884215 kernel: Remapping and enabling EFI services. Oct 27 23:34:14.884221 kernel: smp: Bringing up secondary CPUs ... 
Oct 27 23:34:14.884228 kernel: Detected PIPT I-cache on CPU1 Oct 27 23:34:14.884234 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 27 23:34:14.884241 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000 Oct 27 23:34:14.884248 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:34:14.884254 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 27 23:34:14.884262 kernel: Detected PIPT I-cache on CPU2 Oct 27 23:34:14.884269 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 27 23:34:14.884280 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000 Oct 27 23:34:14.884288 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:34:14.884295 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 27 23:34:14.884302 kernel: Detected PIPT I-cache on CPU3 Oct 27 23:34:14.884309 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 27 23:34:14.884316 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000 Oct 27 23:34:14.884324 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 27 23:34:14.884331 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 27 23:34:14.884338 kernel: smp: Brought up 1 node, 4 CPUs Oct 27 23:34:14.884345 kernel: SMP: Total of 4 processors activated. Oct 27 23:34:14.884351 kernel: CPU features: detected: 32-bit EL0 Support Oct 27 23:34:14.884358 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 27 23:34:14.884365 kernel: CPU features: detected: Common not Private translations Oct 27 23:34:14.884372 kernel: CPU features: detected: CRC32 instructions Oct 27 23:34:14.884379 kernel: CPU features: detected: Enhanced Virtualization Traps Oct 27 23:34:14.884387 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 27 23:34:14.884394 kernel: CPU features: detected: LSE atomic instructions Oct 27 23:34:14.884401 kernel: CPU features: detected: Privileged Access Never Oct 27 23:34:14.884408 kernel: CPU features: detected: RAS Extension Support Oct 27 23:34:14.884414 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 27 23:34:14.884421 kernel: CPU: All CPU(s) started at EL1 Oct 27 23:34:14.884428 kernel: alternatives: applying system-wide alternatives Oct 27 23:34:14.884435 kernel: devtmpfs: initialized Oct 27 23:34:14.884442 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 27 23:34:14.884450 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 27 23:34:14.884457 kernel: pinctrl core: initialized pinctrl subsystem Oct 27 23:34:14.884464 kernel: SMBIOS 3.0.0 present. 
Oct 27 23:34:14.884471 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Oct 27 23:34:14.884478 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 27 23:34:14.884485 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 27 23:34:14.884492 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 27 23:34:14.884499 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 27 23:34:14.884506 kernel: audit: initializing netlink subsys (disabled) Oct 27 23:34:14.884514 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1 Oct 27 23:34:14.884521 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 27 23:34:14.884527 kernel: cpuidle: using governor menu Oct 27 23:34:14.884534 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 27 23:34:14.884541 kernel: ASID allocator initialised with 32768 entries Oct 27 23:34:14.884549 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 27 23:34:14.884561 kernel: Serial: AMBA PL011 UART driver Oct 27 23:34:14.884619 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Oct 27 23:34:14.884627 kernel: Modules: 0 pages in range for non-PLT usage Oct 27 23:34:14.884636 kernel: Modules: 509248 pages in range for PLT usage Oct 27 23:34:14.884643 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Oct 27 23:34:14.884650 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Oct 27 23:34:14.884657 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Oct 27 23:34:14.884664 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Oct 27 23:34:14.884671 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Oct 27 23:34:14.884678 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Oct 27 23:34:14.884684 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Oct 27 23:34:14.884691 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Oct 27 23:34:14.884699 kernel: ACPI: Added _OSI(Module Device) Oct 27 23:34:14.884706 kernel: ACPI: Added _OSI(Processor Device) Oct 27 23:34:14.884713 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 27 23:34:14.884720 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 27 23:34:14.884727 kernel: ACPI: Interpreter enabled Oct 27 23:34:14.884734 kernel: ACPI: Using GIC for interrupt routing Oct 27 23:34:14.884741 kernel: ACPI: MCFG table detected, 1 entries Oct 27 23:34:14.884748 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 27 23:34:14.884755 kernel: printk: console [ttyAMA0] enabled Oct 27 23:34:14.884763 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 27 23:34:14.884901 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 27 23:34:14.884973 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 27 23:34:14.885036 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 27 23:34:14.885096 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 27 23:34:14.885177 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 27 23:34:14.885188 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 27 23:34:14.885198 kernel: PCI host bridge to bus 0000:00 Oct 27 23:34:14.885270 kernel: 
pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 27 23:34:14.885328 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Oct 27 23:34:14.885384 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 27 23:34:14.885439 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 27 23:34:14.885515 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 27 23:34:14.885617 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 27 23:34:14.885690 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 27 23:34:14.885754 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 27 23:34:14.885817 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 27 23:34:14.885879 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 27 23:34:14.885942 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 27 23:34:14.886004 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 27 23:34:14.886061 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 27 23:34:14.886120 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 27 23:34:14.886176 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 27 23:34:14.886185 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 27 23:34:14.886193 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 27 23:34:14.886200 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 27 23:34:14.886207 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 27 23:34:14.886213 kernel: iommu: Default domain type: Translated Oct 27 23:34:14.886221 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 27 23:34:14.886229 kernel: efivars: Registered efivars operations Oct 27 23:34:14.886236 kernel: vgaarb: loaded Oct 27 23:34:14.886243 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 27 23:34:14.886250 kernel: VFS: Disk quotas dquot_6.6.0 Oct 27 23:34:14.886257 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 27 23:34:14.886264 kernel: pnp: PnP ACPI init Oct 27 23:34:14.886332 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 27 23:34:14.886342 kernel: pnp: PnP ACPI: found 1 devices Oct 27 23:34:14.886351 kernel: NET: Registered PF_INET protocol family Oct 27 23:34:14.886358 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 27 23:34:14.886365 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 27 23:34:14.886373 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 27 23:34:14.886380 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 27 23:34:14.886387 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Oct 27 23:34:14.886394 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 27 23:34:14.886401 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 23:34:14.886408 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 27 23:34:14.886416 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 27 23:34:14.886423 kernel: PCI: CLS 0 bytes, default 64 Oct 27 23:34:14.886430 kernel: kvm [1]: HYP mode not available Oct 27 23:34:14.886436 kernel: Initialise system trusted keyrings Oct 
27 23:34:14.886443 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 27 23:34:14.886450 kernel: Key type asymmetric registered Oct 27 23:34:14.886457 kernel: Asymmetric key parser 'x509' registered Oct 27 23:34:14.886464 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Oct 27 23:34:14.886471 kernel: io scheduler mq-deadline registered Oct 27 23:34:14.886480 kernel: io scheduler kyber registered Oct 27 23:34:14.886486 kernel: io scheduler bfq registered Oct 27 23:34:14.886493 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 27 23:34:14.886500 kernel: ACPI: button: Power Button [PWRB] Oct 27 23:34:14.886508 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 27 23:34:14.886590 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 27 23:34:14.886601 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 27 23:34:14.886608 kernel: thunder_xcv, ver 1.0 Oct 27 23:34:14.886615 kernel: thunder_bgx, ver 1.0 Oct 27 23:34:14.886624 kernel: nicpf, ver 1.0 Oct 27 23:34:14.886631 kernel: nicvf, ver 1.0 Oct 27 23:34:14.886703 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 27 23:34:14.886765 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-10-27T23:34:14 UTC (1761608054) Oct 27 23:34:14.886774 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 27 23:34:14.886781 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 27 23:34:14.886789 kernel: watchdog: Delayed init of the lockup detector failed: -19 Oct 27 23:34:14.886796 kernel: watchdog: Hard watchdog permanently disabled Oct 27 23:34:14.886805 kernel: NET: Registered PF_INET6 protocol family Oct 27 23:34:14.886812 kernel: Segment Routing with IPv6 Oct 27 23:34:14.886819 kernel: In-situ OAM (IOAM) with IPv6 Oct 27 23:34:14.886826 kernel: NET: Registered PF_PACKET protocol family Oct 27 23:34:14.886833 kernel: Key type dns_resolver registered Oct 27 23:34:14.886840 kernel: registered taskstats version 1 Oct 27 23:34:14.886847 kernel: Loading compiled-in X.509 certificates Oct 27 23:34:14.886854 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: 410133625b419ed591a9386099a5a05c0b3153a6' Oct 27 23:34:14.886861 kernel: Key type .fscrypt registered Oct 27 23:34:14.886869 kernel: Key type fscrypt-provisioning registered Oct 27 23:34:14.886876 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 27 23:34:14.886883 kernel: ima: Allocated hash algorithm: sha1 Oct 27 23:34:14.886890 kernel: ima: No architecture policies found Oct 27 23:34:14.886896 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 27 23:34:14.886903 kernel: clk: Disabling unused clocks Oct 27 23:34:14.886910 kernel: Freeing unused kernel memory: 38400K Oct 27 23:34:14.886917 kernel: Run /init as init process Oct 27 23:34:14.886924 kernel: with arguments: Oct 27 23:34:14.886931 kernel: /init Oct 27 23:34:14.886939 kernel: with environment: Oct 27 23:34:14.886945 kernel: HOME=/ Oct 27 23:34:14.886952 kernel: TERM=linux Oct 27 23:34:14.886960 systemd[1]: Successfully made /usr/ read-only. 
Oct 27 23:34:14.886970 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 23:34:14.886978 systemd[1]: Detected virtualization kvm. Oct 27 23:34:14.886985 systemd[1]: Detected architecture arm64. Oct 27 23:34:14.886993 systemd[1]: Running in initrd. Oct 27 23:34:14.887001 systemd[1]: No hostname configured, using default hostname. Oct 27 23:34:14.887008 systemd[1]: Hostname set to . Oct 27 23:34:14.887016 systemd[1]: Initializing machine ID from VM UUID. Oct 27 23:34:14.887023 kernel: hrtimer: interrupt took 6444640 ns Oct 27 23:34:14.887030 systemd[1]: Queued start job for default target initrd.target. Oct 27 23:34:14.887038 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 23:34:14.887045 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 23:34:14.887055 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Oct 27 23:34:14.887062 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 23:34:14.887070 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Oct 27 23:34:14.887078 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Oct 27 23:34:14.887087 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Oct 27 23:34:14.887095 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Oct 27 23:34:14.887102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 23:34:14.887112 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 23:34:14.887119 systemd[1]: Reached target paths.target - Path Units. Oct 27 23:34:14.887127 systemd[1]: Reached target slices.target - Slice Units. Oct 27 23:34:14.887134 systemd[1]: Reached target swap.target - Swaps. Oct 27 23:34:14.887142 systemd[1]: Reached target timers.target - Timer Units. Oct 27 23:34:14.887149 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 23:34:14.887157 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 23:34:14.887164 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Oct 27 23:34:14.887172 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Oct 27 23:34:14.887181 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 23:34:14.887189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 23:34:14.887196 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 23:34:14.887204 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 23:34:14.887212 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Oct 27 23:34:14.887219 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 23:34:14.887227 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Oct 27 23:34:14.887234 systemd[1]: Starting systemd-fsck-usr.service... Oct 27 23:34:14.887243 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 23:34:14.887251 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 23:34:14.887258 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:34:14.887266 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Oct 27 23:34:14.887273 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 23:34:14.887281 systemd[1]: Finished systemd-fsck-usr.service. Oct 27 23:34:14.887307 systemd-journald[239]: Collecting audit messages is disabled. Oct 27 23:34:14.887325 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:34:14.887334 systemd-journald[239]: Journal started Oct 27 23:34:14.887353 systemd-journald[239]: Runtime Journal (/run/log/journal/15965eabc2a74e8bad6821093ff50c63) is 5.9M, max 47.3M, 41.4M free. Oct 27 23:34:14.879359 systemd-modules-load[240]: Inserted module 'overlay' Oct 27 23:34:14.897752 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 27 23:34:14.897771 kernel: Bridge firewalling registered Oct 27 23:34:14.897781 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 23:34:14.893689 systemd-modules-load[240]: Inserted module 'br_netfilter' Oct 27 23:34:14.899435 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Oct 27 23:34:14.903789 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 23:34:14.904200 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 23:34:14.907915 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Oct 27 23:34:14.922815 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:34:14.924471 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 23:34:14.928719 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 23:34:14.930224 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:34:14.934743 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Oct 27 23:34:14.936946 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:34:14.940661 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 23:34:14.942736 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 23:34:14.946668 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 23:34:14.949674 dracut-cmdline[274]: dracut-dracut-053 Oct 27 23:34:14.949674 dracut-cmdline[274]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e7e3bb3d45cdf83dc44aaf22327a51afe76152af638616b83c00ab1a45937f6d Oct 27 23:34:14.977858 systemd-resolved[287]: Positive Trust Anchors: Oct 27 23:34:14.977878 systemd-resolved[287]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 23:34:14.977908 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 23:34:14.982618 systemd-resolved[287]: Defaulting to hostname 'linux'. Oct 27 23:34:14.986490 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Oct 27 23:34:14.988956 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:34:15.013604 kernel: SCSI subsystem initialized Oct 27 23:34:15.017600 kernel: Loading iSCSI transport class v2.0-870. Oct 27 23:34:15.025600 kernel: iscsi: registered transport (tcp) Oct 27 23:34:15.038591 kernel: iscsi: registered transport (qla4xxx) Oct 27 23:34:15.038615 kernel: QLogic iSCSI HBA Driver Oct 27 23:34:15.079376 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Oct 27 23:34:15.095890 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Oct 27 23:34:15.112611 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 27 23:34:15.112704 kernel: device-mapper: uevent: version 1.0.3 Oct 27 23:34:15.112725 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Oct 27 23:34:15.158589 kernel: raid6: neonx8 gen() 15789 MB/s Oct 27 23:34:15.175588 kernel: raid6: neonx4 gen() 15801 MB/s Oct 27 23:34:15.192597 kernel: raid6: neonx2 gen() 13217 MB/s Oct 27 23:34:15.209583 kernel: raid6: neonx1 gen() 10508 MB/s Oct 27 23:34:15.226582 kernel: raid6: int64x8 gen() 6792 MB/s Oct 27 23:34:15.243584 kernel: raid6: int64x4 gen() 7349 MB/s Oct 27 23:34:15.260589 kernel: raid6: int64x2 gen() 6106 MB/s Oct 27 23:34:15.277598 kernel: raid6: int64x1 gen() 5053 MB/s Oct 27 23:34:15.277611 kernel: raid6: using algorithm neonx4 gen() 15801 MB/s Oct 27 23:34:15.295592 kernel: raid6: .... xor() 12460 MB/s, rmw enabled Oct 27 23:34:15.295621 kernel: raid6: using neon recovery algorithm Oct 27 23:34:15.300869 kernel: xor: measuring software checksum speed Oct 27 23:34:15.300886 kernel: 8regs : 21607 MB/sec Oct 27 23:34:15.301583 kernel: 32regs : 21693 MB/sec Oct 27 23:34:15.301597 kernel: arm64_neon : 24787 MB/sec Oct 27 23:34:15.302611 kernel: xor: using function: arm64_neon (24787 MB/sec) Oct 27 23:34:15.349598 kernel: Btrfs loaded, zoned=no, fsverity=no Oct 27 23:34:15.360379 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Oct 27 23:34:15.379828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 23:34:15.393213 systemd-udevd[464]: Using default interface naming scheme 'v255'. Oct 27 23:34:15.396839 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 23:34:15.407892 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Oct 27 23:34:15.418667 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation Oct 27 23:34:15.444413 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 23:34:15.455799 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 23:34:15.496217 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 23:34:15.506765 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Oct 27 23:34:15.518240 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Oct 27 23:34:15.520403 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 23:34:15.522613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:34:15.524291 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 23:34:15.534959 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Oct 27 23:34:15.545618 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Oct 27 23:34:15.555538 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Oct 27 23:34:15.557076 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Oct 27 23:34:15.563139 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 23:34:15.563261 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:34:15.573134 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 27 23:34:15.573156 kernel: GPT:9289727 != 19775487 Oct 27 23:34:15.573167 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 27 23:34:15.573183 kernel: GPT:9289727 != 19775487 Oct 27 23:34:15.573194 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 27 23:34:15.573203 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:34:15.573009 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 23:34:15.574522 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 23:34:15.574699 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:34:15.578040 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:34:15.586860 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:34:15.599604 kernel: BTRFS: device fsid 723df9de-b44a-4541-8b84-1b67589aa78f devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (528) Oct 27 23:34:15.601606 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:34:15.606590 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (518) Oct 27 23:34:15.611002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Oct 27 23:34:15.623002 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Oct 27 23:34:15.633710 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Oct 27 23:34:15.634937 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Oct 27 23:34:15.644244 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 23:34:15.654718 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Oct 27 23:34:15.656479 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Oct 27 23:34:15.662503 disk-uuid[554]: Primary Header is updated. Oct 27 23:34:15.662503 disk-uuid[554]: Secondary Entries is updated. Oct 27 23:34:15.662503 disk-uuid[554]: Secondary Header is updated. Oct 27 23:34:15.667584 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:34:15.683172 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:34:16.671599 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 27 23:34:16.672436 disk-uuid[555]: The operation has completed successfully. Oct 27 23:34:16.698905 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 27 23:34:16.699011 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Oct 27 23:34:16.734712 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Oct 27 23:34:16.737506 sh[576]: Success Oct 27 23:34:16.747617 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 27 23:34:16.775826 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Oct 27 23:34:16.784008 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Oct 27 23:34:16.785475 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Oct 27 23:34:16.798611 kernel: BTRFS info (device dm-0): first mount of filesystem 723df9de-b44a-4541-8b84-1b67589aa78f Oct 27 23:34:16.798663 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:34:16.798674 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Oct 27 23:34:16.801039 kernel: BTRFS info (device dm-0): disabling log replay at mount time Oct 27 23:34:16.801058 kernel: BTRFS info (device dm-0): using free space tree Oct 27 23:34:16.804989 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Oct 27 23:34:16.806318 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Oct 27 23:34:16.814723 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Oct 27 23:34:16.816283 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Oct 27 23:34:16.830590 kernel: BTRFS info (device vda6): first mount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:34:16.830642 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:34:16.830654 kernel: BTRFS info (device vda6): using free space tree Oct 27 23:34:16.833586 kernel: BTRFS info (device vda6): auto enabling async discard Oct 27 23:34:16.838608 kernel: BTRFS info (device vda6): last unmount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:34:16.843065 systemd[1]: Finished ignition-setup.service - Ignition (setup). Oct 27 23:34:16.848845 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Oct 27 23:34:16.910962 ignition[664]: Ignition 2.20.0 Oct 27 23:34:16.910972 ignition[664]: Stage: fetch-offline Oct 27 23:34:16.911010 ignition[664]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:34:16.911018 ignition[664]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:34:16.913534 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Oct 27 23:34:16.911172 ignition[664]: parsed url from cmdline: "" Oct 27 23:34:16.911176 ignition[664]: no config URL provided Oct 27 23:34:16.911180 ignition[664]: reading system config file "/usr/lib/ignition/user.ign" Oct 27 23:34:16.911187 ignition[664]: no config at "/usr/lib/ignition/user.ign" Oct 27 23:34:16.911210 ignition[664]: op(1): [started] loading QEMU firmware config module Oct 27 23:34:16.911215 ignition[664]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 27 23:34:16.920122 ignition[664]: op(1): [finished] loading QEMU firmware config module Oct 27 23:34:16.924755 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 23:34:16.946238 systemd-networkd[765]: lo: Link UP Oct 27 23:34:16.946251 systemd-networkd[765]: lo: Gained carrier Oct 27 23:34:16.947077 systemd-networkd[765]: Enumeration completed Oct 27 23:34:16.947308 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 23:34:16.947528 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:34:16.947531 systemd-networkd[765]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 23:34:16.948310 systemd-networkd[765]: eth0: Link UP Oct 27 23:34:16.948313 systemd-networkd[765]: eth0: Gained carrier Oct 27 23:34:16.948319 systemd-networkd[765]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:34:16.949724 systemd[1]: Reached target network.target - Network. Oct 27 23:34:16.974635 systemd-networkd[765]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 23:34:16.976300 ignition[664]: parsing config with SHA512: 84f1aa0fb0442afeb0529401fddb69c41fe285b96cf311d86b3a7ee0e2ad93e16063b43ac91568ba6b438b7cbd3a7dbc9335eddc095a18541b189b036a427752 Oct 27 23:34:16.980855 unknown[664]: fetched base config from "system" Oct 27 23:34:16.980865 unknown[664]: fetched user config from "qemu" Oct 27 23:34:16.981273 ignition[664]: fetch-offline: fetch-offline passed Oct 27 23:34:16.981347 ignition[664]: Ignition finished successfully Oct 27 23:34:16.984621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 23:34:16.985836 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 27 23:34:16.995828 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Oct 27 23:34:17.008232 ignition[770]: Ignition 2.20.0 Oct 27 23:34:17.008241 ignition[770]: Stage: kargs Oct 27 23:34:17.008403 ignition[770]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:34:17.008412 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:34:17.012246 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Oct 27 23:34:17.009310 ignition[770]: kargs: kargs passed Oct 27 23:34:17.009352 ignition[770]: Ignition finished successfully Oct 27 23:34:17.020772 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Oct 27 23:34:17.029759 ignition[779]: Ignition 2.20.0 Oct 27 23:34:17.029770 ignition[779]: Stage: disks Oct 27 23:34:17.029940 ignition[779]: no configs at "/usr/lib/ignition/base.d" Oct 27 23:34:17.029950 ignition[779]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:34:17.030819 ignition[779]: disks: disks passed Oct 27 23:34:17.032628 systemd[1]: Finished ignition-disks.service - Ignition (disks). Oct 27 23:34:17.030865 ignition[779]: Ignition finished successfully Oct 27 23:34:17.034562 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Oct 27 23:34:17.035993 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Oct 27 23:34:17.037841 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 23:34:17.039316 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 23:34:17.041049 systemd[1]: Reached target basic.target - Basic System. Oct 27 23:34:17.043728 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Oct 27 23:34:17.059260 systemd-fsck[790]: ROOT: clean, 14/553520 files, 52654/553472 blocks Oct 27 23:34:17.064270 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Oct 27 23:34:17.072699 systemd[1]: Mounting sysroot.mount - /sysroot... Oct 27 23:34:17.113590 kernel: EXT4-fs (vda9): mounted filesystem 14252103-6df9-4b3e-8ac7-75c6ad5090da r/w with ordered data mode. Quota mode: none. Oct 27 23:34:17.114250 systemd[1]: Mounted sysroot.mount - /sysroot. Oct 27 23:34:17.115578 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Oct 27 23:34:17.131743 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 23:34:17.133696 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Oct 27 23:34:17.134756 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Oct 27 23:34:17.134805 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 27 23:34:17.134869 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 23:34:17.144166 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (798) Oct 27 23:34:17.141826 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Oct 27 23:34:17.148709 kernel: BTRFS info (device vda6): first mount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:34:17.148728 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:34:17.148738 kernel: BTRFS info (device vda6): using free space tree Oct 27 23:34:17.143950 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Oct 27 23:34:17.152580 kernel: BTRFS info (device vda6): auto enabling async discard Oct 27 23:34:17.153364 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 23:34:17.187104 initrd-setup-root[822]: cut: /sysroot/etc/passwd: No such file or directory Oct 27 23:34:17.191838 initrd-setup-root[829]: cut: /sysroot/etc/group: No such file or directory Oct 27 23:34:17.196038 initrd-setup-root[836]: cut: /sysroot/etc/shadow: No such file or directory Oct 27 23:34:17.202412 initrd-setup-root[843]: cut: /sysroot/etc/gshadow: No such file or directory Oct 27 23:34:17.293581 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Oct 27 23:34:17.304697 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Oct 27 23:34:17.306718 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Oct 27 23:34:17.317640 kernel: BTRFS info (device vda6): last unmount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:34:17.331092 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Oct 27 23:34:17.341092 ignition[913]: INFO : Ignition 2.20.0 Oct 27 23:34:17.341092 ignition[913]: INFO : Stage: mount Oct 27 23:34:17.342670 ignition[913]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:34:17.342670 ignition[913]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:34:17.342670 ignition[913]: INFO : mount: mount passed Oct 27 23:34:17.342670 ignition[913]: INFO : Ignition finished successfully Oct 27 23:34:17.345698 systemd[1]: Finished ignition-mount.service - Ignition (mount). Oct 27 23:34:17.353779 systemd[1]: Starting ignition-files.service - Ignition (files)... Oct 27 23:34:17.928473 systemd[1]: sysroot-oem.mount: Deactivated successfully. Oct 27 23:34:17.944736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Oct 27 23:34:17.950717 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (925) Oct 27 23:34:17.953191 kernel: BTRFS info (device vda6): first mount of filesystem 232c0498-06c4-4cb2-9fe9-f3d47991f5ef Oct 27 23:34:17.953210 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 27 23:34:17.953220 kernel: BTRFS info (device vda6): using free space tree Oct 27 23:34:17.956604 kernel: BTRFS info (device vda6): auto enabling async discard Oct 27 23:34:17.957037 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Oct 27 23:34:17.976535 ignition[943]: INFO : Ignition 2.20.0 Oct 27 23:34:17.976535 ignition[943]: INFO : Stage: files Oct 27 23:34:17.978004 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:34:17.978004 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:34:17.978004 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Oct 27 23:34:17.980918 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 27 23:34:17.980918 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 27 23:34:17.984456 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 27 23:34:17.985747 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 27 23:34:17.985747 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 27 23:34:17.985457 unknown[943]: wrote ssh authorized keys file for user: core Oct 27 23:34:17.989162 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 27 23:34:17.989162 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Oct 27 23:34:17.992683 systemd-networkd[765]: eth0: Gained IPv6LL Oct 27 23:34:18.118595 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 27 23:34:18.691086 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Oct 27 23:34:18.693051 ignition[943]: INFO : files: 
createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 27 23:34:18.693051 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Oct 27 23:34:18.911365 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 27 23:34:19.008552 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Oct 27 23:34:19.008552 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:34:19.012030 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Oct 27 23:34:19.328039 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Oct 27 23:34:19.617595 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Oct 27 23:34:19.617595 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(c): op(d): [finished] 
writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Oct 27 23:34:19.621237 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 27 23:34:19.636145 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 23:34:19.639170 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 27 23:34:19.640795 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 27 23:34:19.640795 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Oct 27 23:34:19.640795 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Oct 27 23:34:19.640795 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 27 23:34:19.640795 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 27 23:34:19.640795 ignition[943]: INFO : files: files passed Oct 27 23:34:19.640795 ignition[943]: INFO : Ignition finished successfully Oct 27 23:34:19.641216 systemd[1]: Finished ignition-files.service - Ignition (files). Oct 27 23:34:19.652763 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Oct 27 23:34:19.655197 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Oct 27 23:34:19.656887 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 27 23:34:19.656966 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Oct 27 23:34:19.663034 initrd-setup-root-after-ignition[970]: grep: /sysroot/oem/oem-release: No such file or directory Oct 27 23:34:19.665768 initrd-setup-root-after-ignition[972]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:34:19.665768 initrd-setup-root-after-ignition[972]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:34:19.669577 initrd-setup-root-after-ignition[976]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 27 23:34:19.669969 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 23:34:19.672263 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Oct 27 23:34:19.684739 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Oct 27 23:34:19.704087 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 27 23:34:19.705086 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Oct 27 23:34:19.706697 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Oct 27 23:34:19.708250 systemd[1]: Reached target initrd.target - Initrd Default Target. Oct 27 23:34:19.710112 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Oct 27 23:34:19.717730 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Oct 27 23:34:19.729761 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 23:34:19.732403 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Oct 27 23:34:19.743176 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:34:19.744375 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:34:19.746382 systemd[1]: Stopped target timers.target - Timer Units. Oct 27 23:34:19.748160 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 27 23:34:19.748287 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Oct 27 23:34:19.750598 systemd[1]: Stopped target initrd.target - Initrd Default Target. Oct 27 23:34:19.752578 systemd[1]: Stopped target basic.target - Basic System. Oct 27 23:34:19.754183 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Oct 27 23:34:19.755852 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Oct 27 23:34:19.757702 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Oct 27 23:34:19.759540 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Oct 27 23:34:19.761319 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Oct 27 23:34:19.763165 systemd[1]: Stopped target sysinit.target - System Initialization. Oct 27 23:34:19.764956 systemd[1]: Stopped target local-fs.target - Local File Systems. Oct 27 23:34:19.766487 systemd[1]: Stopped target swap.target - Swaps. Oct 27 23:34:19.767992 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 27 23:34:19.768125 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Oct 27 23:34:19.770421 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Oct 27 23:34:19.772460 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 23:34:19.774305 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Oct 27 23:34:19.775629 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 23:34:19.777173 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 27 23:34:19.777288 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Oct 27 23:34:19.779978 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 27 23:34:19.780100 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Oct 27 23:34:19.781943 systemd[1]: Stopped target paths.target - Path Units. Oct 27 23:34:19.783443 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 27 23:34:19.786640 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 23:34:19.787828 systemd[1]: Stopped target slices.target - Slice Units. Oct 27 23:34:19.789725 systemd[1]: Stopped target sockets.target - Socket Units. Oct 27 23:34:19.791169 systemd[1]: iscsid.socket: Deactivated successfully. 
Oct 27 23:34:19.791254 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Oct 27 23:34:19.792658 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 27 23:34:19.792738 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Oct 27 23:34:19.794265 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 27 23:34:19.794377 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Oct 27 23:34:19.795995 systemd[1]: ignition-files.service: Deactivated successfully. Oct 27 23:34:19.796098 systemd[1]: Stopped ignition-files.service - Ignition (files). Oct 27 23:34:19.809733 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Oct 27 23:34:19.810553 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 27 23:34:19.810700 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 23:34:19.814752 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Oct 27 23:34:19.815520 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 27 23:34:19.815654 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 23:34:19.817474 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 27 23:34:19.817618 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Oct 27 23:34:19.824918 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 27 23:34:19.825848 ignition[997]: INFO : Ignition 2.20.0 Oct 27 23:34:19.825848 ignition[997]: INFO : Stage: umount Oct 27 23:34:19.825848 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 27 23:34:19.825848 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 27 23:34:19.834492 ignition[997]: INFO : umount: umount passed Oct 27 23:34:19.834492 ignition[997]: INFO : Ignition finished successfully Oct 27 23:34:19.827140 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 27 23:34:19.827230 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Oct 27 23:34:19.828696 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 27 23:34:19.828871 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Oct 27 23:34:19.830781 systemd[1]: Stopped target network.target - Network. Oct 27 23:34:19.831714 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 27 23:34:19.831788 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Oct 27 23:34:19.833508 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 27 23:34:19.833562 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Oct 27 23:34:19.835589 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 27 23:34:19.835643 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Oct 27 23:34:19.837228 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Oct 27 23:34:19.837270 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Oct 27 23:34:19.838973 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Oct 27 23:34:19.840689 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Oct 27 23:34:19.851430 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 27 23:34:19.851544 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Oct 27 23:34:19.855360 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. 
Oct 27 23:34:19.855591 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 27 23:34:19.855795 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Oct 27 23:34:19.858904 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Oct 27 23:34:19.859744 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 27 23:34:19.859807 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Oct 27 23:34:19.872668 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Oct 27 23:34:19.873530 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 27 23:34:19.873624 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Oct 27 23:34:19.875797 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 23:34:19.875843 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:34:19.878877 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 27 23:34:19.878921 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Oct 27 23:34:19.880693 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Oct 27 23:34:19.880802 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 23:34:19.883647 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 23:34:19.887694 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 27 23:34:19.887755 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Oct 27 23:34:19.894311 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 27 23:34:19.894432 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Oct 27 23:34:19.900063 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 27 23:34:19.900171 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Oct 27 23:34:19.902118 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 27 23:34:19.902235 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Oct 27 23:34:19.904231 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 27 23:34:19.904292 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Oct 27 23:34:19.905365 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 27 23:34:19.905395 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 23:34:19.907341 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 27 23:34:19.907390 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Oct 27 23:34:19.909882 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 27 23:34:19.909926 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Oct 27 23:34:19.912458 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 27 23:34:19.912506 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Oct 27 23:34:19.915336 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 27 23:34:19.915383 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Oct 27 23:34:19.930718 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Oct 27 23:34:19.931717 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Oct 27 23:34:19.931771 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Oct 27 23:34:19.934889 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 27 23:34:19.934930 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:34:19.938744 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 27 23:34:19.938797 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Oct 27 23:34:19.939103 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 27 23:34:19.939194 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Oct 27 23:34:19.941918 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Oct 27 23:34:19.944325 systemd[1]: Starting initrd-switch-root.service - Switch Root... Oct 27 23:34:19.953183 systemd[1]: Switching root. Oct 27 23:34:19.984493 systemd-journald[239]: Journal stopped Oct 27 23:34:20.706164 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). Oct 27 23:34:20.706216 kernel: SELinux: policy capability network_peer_controls=1 Oct 27 23:34:20.706228 kernel: SELinux: policy capability open_perms=1 Oct 27 23:34:20.706240 kernel: SELinux: policy capability extended_socket_class=1 Oct 27 23:34:20.706249 kernel: SELinux: policy capability always_check_network=0 Oct 27 23:34:20.706258 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 27 23:34:20.706270 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 27 23:34:20.706280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 27 23:34:20.706291 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 27 23:34:20.706300 kernel: audit: type=1403 audit(1761608060.143:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 27 23:34:20.706310 systemd[1]: Successfully loaded SELinux policy in 32.274ms. Oct 27 23:34:20.706329 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.399ms. Oct 27 23:34:20.706340 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Oct 27 23:34:20.706350 systemd[1]: Detected virtualization kvm. Oct 27 23:34:20.706366 systemd[1]: Detected architecture arm64. Oct 27 23:34:20.706383 systemd[1]: Detected first boot. Oct 27 23:34:20.706393 systemd[1]: Initializing machine ID from VM UUID. Oct 27 23:34:20.706403 zram_generator::config[1046]: No configuration found. Oct 27 23:34:20.706415 kernel: NET: Registered PF_VSOCK protocol family Oct 27 23:34:20.706424 systemd[1]: Populated /etc with preset unit settings. Oct 27 23:34:20.706435 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Oct 27 23:34:20.706445 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 27 23:34:20.706455 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Oct 27 23:34:20.706467 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 27 23:34:20.706477 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Oct 27 23:34:20.706487 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. 
Oct 27 23:34:20.706498 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Oct 27 23:34:20.706512 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Oct 27 23:34:20.706522 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Oct 27 23:34:20.706543 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Oct 27 23:34:20.706555 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Oct 27 23:34:20.706574 systemd[1]: Created slice user.slice - User and Session Slice. Oct 27 23:34:20.706586 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Oct 27 23:34:20.706597 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Oct 27 23:34:20.706608 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Oct 27 23:34:20.706618 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Oct 27 23:34:20.706628 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Oct 27 23:34:20.706638 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Oct 27 23:34:20.706648 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Oct 27 23:34:20.706658 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Oct 27 23:34:20.706667 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Oct 27 23:34:20.706679 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Oct 27 23:34:20.706689 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Oct 27 23:34:20.706699 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Oct 27 23:34:20.706709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Oct 27 23:34:20.706719 systemd[1]: Reached target remote-fs.target - Remote File Systems. Oct 27 23:34:20.706729 systemd[1]: Reached target slices.target - Slice Units. Oct 27 23:34:20.706739 systemd[1]: Reached target swap.target - Swaps. Oct 27 23:34:20.706749 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Oct 27 23:34:20.706761 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Oct 27 23:34:20.706772 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Oct 27 23:34:20.706782 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Oct 27 23:34:20.706791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Oct 27 23:34:20.706801 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Oct 27 23:34:20.706811 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Oct 27 23:34:20.706821 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Oct 27 23:34:20.706832 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Oct 27 23:34:20.706842 systemd[1]: Mounting media.mount - External Media Directory... Oct 27 23:34:20.706853 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Oct 27 23:34:20.706868 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Oct 27 23:34:20.706878 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Oct 27 23:34:20.706888 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 27 23:34:20.706899 systemd[1]: Reached target machines.target - Containers. Oct 27 23:34:20.706908 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Oct 27 23:34:20.706919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:34:20.706930 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Oct 27 23:34:20.706942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Oct 27 23:34:20.706952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 23:34:20.706961 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 23:34:20.706971 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 23:34:20.706982 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Oct 27 23:34:20.706993 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 23:34:20.707003 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 27 23:34:20.707013 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 27 23:34:20.707024 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Oct 27 23:34:20.707034 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 27 23:34:20.707044 systemd[1]: Stopped systemd-fsck-usr.service. Oct 27 23:34:20.707054 kernel: loop: module loaded Oct 27 23:34:20.707064 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:34:20.707073 kernel: fuse: init (API version 7.39) Oct 27 23:34:20.707082 kernel: ACPI: bus type drm_connector registered Oct 27 23:34:20.707092 systemd[1]: Starting systemd-journald.service - Journal Service... Oct 27 23:34:20.707102 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Oct 27 23:34:20.707114 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Oct 27 23:34:20.707125 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Oct 27 23:34:20.707150 systemd-journald[1114]: Collecting audit messages is disabled. Oct 27 23:34:20.707171 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Oct 27 23:34:20.707182 systemd-journald[1114]: Journal started Oct 27 23:34:20.707202 systemd-journald[1114]: Runtime Journal (/run/log/journal/15965eabc2a74e8bad6821093ff50c63) is 5.9M, max 47.3M, 41.4M free. Oct 27 23:34:20.513417 systemd[1]: Queued start job for default target multi-user.target. Oct 27 23:34:20.526490 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Oct 27 23:34:20.526907 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 27 23:34:20.712488 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Oct 27 23:34:20.712660 systemd[1]: verity-setup.service: Deactivated successfully. 
Oct 27 23:34:20.714049 systemd[1]: Stopped verity-setup.service. Oct 27 23:34:20.719040 systemd[1]: Started systemd-journald.service - Journal Service. Oct 27 23:34:20.719643 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Oct 27 23:34:20.720729 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Oct 27 23:34:20.721924 systemd[1]: Mounted media.mount - External Media Directory. Oct 27 23:34:20.723006 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Oct 27 23:34:20.724193 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Oct 27 23:34:20.725383 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Oct 27 23:34:20.726672 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Oct 27 23:34:20.729606 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Oct 27 23:34:20.731027 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 27 23:34:20.731181 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Oct 27 23:34:20.732657 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 23:34:20.732824 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 23:34:20.734179 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 23:34:20.735672 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 23:34:20.736992 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 23:34:20.737142 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 23:34:20.738679 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 27 23:34:20.738861 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Oct 27 23:34:20.740293 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 23:34:20.740451 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:34:20.741861 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Oct 27 23:34:20.743367 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Oct 27 23:34:20.744924 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Oct 27 23:34:20.746406 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Oct 27 23:34:20.759069 systemd[1]: Reached target network-pre.target - Preparation for Network. Oct 27 23:34:20.769666 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Oct 27 23:34:20.771672 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Oct 27 23:34:20.772748 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 27 23:34:20.772785 systemd[1]: Reached target local-fs.target - Local File Systems. Oct 27 23:34:20.774626 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Oct 27 23:34:20.776907 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Oct 27 23:34:20.779087 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Oct 27 23:34:20.780261 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Oct 27 23:34:20.781472 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Oct 27 23:34:20.784516 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Oct 27 23:34:20.785848 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 23:34:20.789660 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Oct 27 23:34:20.791111 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 23:34:20.792755 systemd-journald[1114]: Time spent on flushing to /var/log/journal/15965eabc2a74e8bad6821093ff50c63 is 21.041ms for 867 entries. Oct 27 23:34:20.792755 systemd-journald[1114]: System Journal (/var/log/journal/15965eabc2a74e8bad6821093ff50c63) is 8M, max 195.6M, 187.6M free. Oct 27 23:34:20.830842 systemd-journald[1114]: Received client request to flush runtime journal. Oct 27 23:34:20.830889 kernel: loop0: detected capacity change from 0 to 113512 Oct 27 23:34:20.794329 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:34:20.796823 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Oct 27 23:34:20.801778 systemd[1]: Starting systemd-sysusers.service - Create System Users... Oct 27 23:34:20.807166 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Oct 27 23:34:20.811058 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Oct 27 23:34:20.813182 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Oct 27 23:34:20.816580 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Oct 27 23:34:20.818512 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Oct 27 23:34:20.822474 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:34:20.826941 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Oct 27 23:34:20.834745 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Oct 27 23:34:20.838758 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Oct 27 23:34:20.844005 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Oct 27 23:34:20.846630 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Oct 27 23:34:20.848482 systemd[1]: Finished systemd-sysusers.service - Create System Users. Oct 27 23:34:20.857082 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Oct 27 23:34:20.857611 kernel: loop1: detected capacity change from 0 to 207008 Oct 27 23:34:20.861196 udevadm[1179]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 27 23:34:20.862791 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Oct 27 23:34:20.878370 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Oct 27 23:34:20.878396 systemd-tmpfiles[1184]: ACLs are not supported, ignoring. Oct 27 23:34:20.883141 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Oct 27 23:34:20.893688 kernel: loop2: detected capacity change from 0 to 123192 Oct 27 23:34:20.933599 kernel: loop3: detected capacity change from 0 to 113512 Oct 27 23:34:20.939598 kernel: loop4: detected capacity change from 0 to 207008 Oct 27 23:34:20.947591 kernel: loop5: detected capacity change from 0 to 123192 Oct 27 23:34:20.952688 (sd-merge)[1190]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Oct 27 23:34:20.953075 (sd-merge)[1190]: Merged extensions into '/usr'. Oct 27 23:34:20.956313 systemd[1]: Reload requested from client PID 1163 ('systemd-sysext') (unit systemd-sysext.service)... Oct 27 23:34:20.956328 systemd[1]: Reloading... Oct 27 23:34:21.022627 zram_generator::config[1217]: No configuration found. Oct 27 23:34:21.076976 ldconfig[1158]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 27 23:34:21.124164 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:34:21.173821 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 27 23:34:21.174294 systemd[1]: Reloading finished in 217 ms. Oct 27 23:34:21.191601 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Oct 27 23:34:21.192931 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Oct 27 23:34:21.206914 systemd[1]: Starting ensure-sysext.service... Oct 27 23:34:21.208723 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Oct 27 23:34:21.223228 systemd[1]: Reload requested from client PID 1253 ('systemctl') (unit ensure-sysext.service)... Oct 27 23:34:21.223243 systemd[1]: Reloading... Oct 27 23:34:21.224102 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 27 23:34:21.224635 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Oct 27 23:34:21.225368 systemd-tmpfiles[1254]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 27 23:34:21.225689 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Oct 27 23:34:21.225805 systemd-tmpfiles[1254]: ACLs are not supported, ignoring. Oct 27 23:34:21.228778 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 23:34:21.228884 systemd-tmpfiles[1254]: Skipping /boot Oct 27 23:34:21.237372 systemd-tmpfiles[1254]: Detected autofs mount point /boot during canonicalization of boot. Oct 27 23:34:21.237478 systemd-tmpfiles[1254]: Skipping /boot Oct 27 23:34:21.269586 zram_generator::config[1282]: No configuration found. Oct 27 23:34:21.352392 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:34:21.401792 systemd[1]: Reloading finished in 178 ms. Oct 27 23:34:21.415071 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Oct 27 23:34:21.433637 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Oct 27 23:34:21.442078 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Oct 27 23:34:21.444554 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Oct 27 23:34:21.447306 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Oct 27 23:34:21.450657 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Oct 27 23:34:21.454024 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Oct 27 23:34:21.456625 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Oct 27 23:34:21.461989 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:34:21.464998 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 23:34:21.467296 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 23:34:21.471377 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 23:34:21.473278 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 23:34:21.473409 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:34:21.476729 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Oct 27 23:34:21.479095 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Oct 27 23:34:21.482412 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 23:34:21.482627 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 23:34:21.484326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 23:34:21.484494 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 23:34:21.486707 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 23:34:21.486863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:34:21.495900 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:34:21.498483 systemd-udevd[1327]: Using default interface naming scheme 'v255'. Oct 27 23:34:21.508057 augenrules[1353]: No rules Oct 27 23:34:21.508931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 23:34:21.512109 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 23:34:21.518761 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 23:34:21.520021 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 23:34:21.520280 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:34:21.523726 systemd[1]: Starting systemd-update-done.service - Update is Completed... Oct 27 23:34:21.527591 systemd[1]: Started systemd-userdbd.service - User Database Manager. Oct 27 23:34:21.531371 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Oct 27 23:34:21.534386 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 23:34:21.534700 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 23:34:21.537584 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Oct 27 23:34:21.539881 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 23:34:21.541576 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 23:34:21.543271 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 23:34:21.543417 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 23:34:21.545740 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 23:34:21.545920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:34:21.548107 systemd[1]: Finished systemd-update-done.service - Update is Completed. Oct 27 23:34:21.557465 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Oct 27 23:34:21.563350 systemd[1]: Finished ensure-sysext.service. Oct 27 23:34:21.575765 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 23:34:21.576793 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Oct 27 23:34:21.578736 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Oct 27 23:34:21.583950 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Oct 27 23:34:21.586724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Oct 27 23:34:21.588010 systemd-resolved[1323]: Positive Trust Anchors: Oct 27 23:34:21.588038 systemd-resolved[1323]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 27 23:34:21.588070 systemd-resolved[1323]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Oct 27 23:34:21.589953 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Oct 27 23:34:21.590974 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Oct 27 23:34:21.591022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Oct 27 23:34:21.596002 systemd[1]: Starting systemd-networkd.service - Network Configuration... Oct 27 23:34:21.598187 systemd-resolved[1323]: Defaulting to hostname 'linux'. Oct 27 23:34:21.600998 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Oct 27 23:34:21.603527 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 27 23:34:21.603939 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Oct 27 23:34:21.608066 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 27 23:34:21.608228 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Oct 27 23:34:21.609801 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 27 23:34:21.609951 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Oct 27 23:34:21.610648 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384) Oct 27 23:34:21.612584 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 27 23:34:21.612733 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Oct 27 23:34:21.615053 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 27 23:34:21.615205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Oct 27 23:34:21.623157 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Oct 27 23:34:21.625620 augenrules[1394]: /sbin/augenrules: No change Oct 27 23:34:21.639927 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Oct 27 23:34:21.642769 augenrules[1427]: No rules Oct 27 23:34:21.643746 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 27 23:34:21.643809 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Oct 27 23:34:21.647160 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Oct 27 23:34:21.649966 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 23:34:21.650202 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 23:34:21.656739 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Oct 27 23:34:21.672637 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Oct 27 23:34:21.678122 systemd-networkd[1405]: lo: Link UP Oct 27 23:34:21.678389 systemd-networkd[1405]: lo: Gained carrier Oct 27 23:34:21.679313 systemd-networkd[1405]: Enumeration completed Oct 27 23:34:21.679495 systemd[1]: Started systemd-networkd.service - Network Configuration. Oct 27 23:34:21.680065 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:34:21.680153 systemd-networkd[1405]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 27 23:34:21.680682 systemd-networkd[1405]: eth0: Link UP Oct 27 23:34:21.680685 systemd-networkd[1405]: eth0: Gained carrier Oct 27 23:34:21.680699 systemd-networkd[1405]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Oct 27 23:34:21.680809 systemd[1]: Reached target network.target - Network. Oct 27 23:34:21.687762 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Oct 27 23:34:21.692758 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Oct 27 23:34:21.697270 systemd-networkd[1405]: eth0: DHCPv4 address 10.0.0.77/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 27 23:34:21.699323 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. 
Oct 27 23:34:21.700597 systemd[1]: Reached target time-set.target - System Time Set. Oct 27 23:34:21.702458 systemd-timesyncd[1406]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 27 23:34:21.702576 systemd-timesyncd[1406]: Initial clock synchronization to Mon 2025-10-27 23:34:21.338376 UTC. Oct 27 23:34:21.707613 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Oct 27 23:34:21.724862 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Oct 27 23:34:21.737058 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Oct 27 23:34:21.748741 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Oct 27 23:34:21.757507 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Oct 27 23:34:21.759576 lvm[1447]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 27 23:34:21.791614 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Oct 27 23:34:21.792914 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Oct 27 23:34:21.793968 systemd[1]: Reached target sysinit.target - System Initialization. Oct 27 23:34:21.794992 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Oct 27 23:34:21.796123 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Oct 27 23:34:21.797411 systemd[1]: Started logrotate.timer - Daily rotation of log files. Oct 27 23:34:21.798541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Oct 27 23:34:21.799731 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Oct 27 23:34:21.800816 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 27 23:34:21.800846 systemd[1]: Reached target paths.target - Path Units. Oct 27 23:34:21.801618 systemd[1]: Reached target timers.target - Timer Units. Oct 27 23:34:21.803038 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Oct 27 23:34:21.805247 systemd[1]: Starting docker.socket - Docker Socket for the API... Oct 27 23:34:21.808211 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Oct 27 23:34:21.809614 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Oct 27 23:34:21.810688 systemd[1]: Reached target ssh-access.target - SSH Access Available. Oct 27 23:34:21.813488 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Oct 27 23:34:21.815031 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Oct 27 23:34:21.817100 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Oct 27 23:34:21.818601 systemd[1]: Listening on docker.socket - Docker Socket for the API. Oct 27 23:34:21.819616 systemd[1]: Reached target sockets.target - Socket Units. Oct 27 23:34:21.820418 systemd[1]: Reached target basic.target - Basic System. Oct 27 23:34:21.821325 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Oct 27 23:34:21.821353 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Oct 27 23:34:21.822211 systemd[1]: Starting containerd.service - containerd container runtime... Oct 27 23:34:21.823968 lvm[1454]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 27 23:34:21.824030 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Oct 27 23:34:21.826714 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Oct 27 23:34:21.829081 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Oct 27 23:34:21.830295 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Oct 27 23:34:21.833791 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Oct 27 23:34:21.835067 jq[1457]: false Oct 27 23:34:21.835629 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Oct 27 23:34:21.841698 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Oct 27 23:34:21.844797 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Oct 27 23:34:21.847924 dbus-daemon[1456]: [system] SELinux support is enabled Oct 27 23:34:21.848573 extend-filesystems[1458]: Found loop3 Oct 27 23:34:21.849265 systemd[1]: Starting systemd-logind.service - User Login Management... Oct 27 23:34:21.850966 extend-filesystems[1458]: Found loop4 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found loop5 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda1 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda2 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda3 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found usr Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda4 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda6 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda7 Oct 27 23:34:21.850966 extend-filesystems[1458]: Found vda9 Oct 27 23:34:21.850966 extend-filesystems[1458]: Checking size of /dev/vda9 Oct 27 23:34:21.851153 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 27 23:34:21.871857 extend-filesystems[1458]: Resized partition /dev/vda9 Oct 27 23:34:21.876646 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Oct 27 23:34:21.851697 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 27 23:34:21.876809 extend-filesystems[1478]: resize2fs 1.47.1 (20-May-2024) Oct 27 23:34:21.852765 systemd[1]: Starting update-engine.service - Update Engine... Oct 27 23:34:21.860683 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Oct 27 23:34:21.888427 update_engine[1471]: I20251027 23:34:21.884824 1471 main.cc:92] Flatcar Update Engine starting Oct 27 23:34:21.862686 systemd[1]: Started dbus.service - D-Bus System Message Bus. Oct 27 23:34:21.865602 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Oct 27 23:34:21.868996 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 27 23:34:21.869187 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Oct 27 23:34:21.869440 systemd[1]: motdgen.service: Deactivated successfully. 
Oct 27 23:34:21.869740 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Oct 27 23:34:21.875931 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 27 23:34:21.876152 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Oct 27 23:34:21.889844 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 27 23:34:21.889869 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Oct 27 23:34:21.891510 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 27 23:34:21.891558 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Oct 27 23:34:21.895585 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1384) Oct 27 23:34:21.895627 jq[1473]: true Oct 27 23:34:21.898572 update_engine[1471]: I20251027 23:34:21.896930 1471 update_check_scheduler.cc:74] Next update check in 5m17s Oct 27 23:34:21.903603 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Oct 27 23:34:21.905716 (ntainerd)[1490]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Oct 27 23:34:21.919644 tar[1481]: linux-arm64/LICENSE Oct 27 23:34:21.919869 jq[1491]: true Oct 27 23:34:21.920177 tar[1481]: linux-arm64/helm Oct 27 23:34:21.922770 extend-filesystems[1478]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Oct 27 23:34:21.922770 extend-filesystems[1478]: old_desc_blocks = 1, new_desc_blocks = 1 Oct 27 23:34:21.922770 extend-filesystems[1478]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Oct 27 23:34:21.926643 extend-filesystems[1458]: Resized filesystem in /dev/vda9 Oct 27 23:34:21.929042 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 27 23:34:21.929267 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Oct 27 23:34:21.934598 systemd-logind[1469]: Watching system buttons on /dev/input/event0 (Power Button) Oct 27 23:34:21.936966 systemd-logind[1469]: New seat seat0. Oct 27 23:34:21.939049 systemd[1]: Started systemd-logind.service - User Login Management. Oct 27 23:34:21.941210 systemd[1]: Started update-engine.service - Update Engine. Oct 27 23:34:21.954963 systemd[1]: Started locksmithd.service - Cluster reboot manager. Oct 27 23:34:21.970353 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Oct 27 23:34:21.972891 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Oct 27 23:34:21.975060 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Oct 27 23:34:22.010365 locksmithd[1499]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 27 23:34:22.073580 containerd[1490]: time="2025-10-27T23:34:22.072664645Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Oct 27 23:34:22.098629 containerd[1490]: time="2025-10-27T23:34:22.098556235Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Oct 27 23:34:22.099915 containerd[1490]: time="2025-10-27T23:34:22.099878328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:34:22.099915 containerd[1490]: time="2025-10-27T23:34:22.099913721Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 27 23:34:22.099993 containerd[1490]: time="2025-10-27T23:34:22.099931780Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 27 23:34:22.100096 containerd[1490]: time="2025-10-27T23:34:22.100078926Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Oct 27 23:34:22.100131 containerd[1490]: time="2025-10-27T23:34:22.100100917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100171 containerd[1490]: time="2025-10-27T23:34:22.100156774Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100191 containerd[1490]: time="2025-10-27T23:34:22.100170710Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100364 containerd[1490]: time="2025-10-27T23:34:22.100345765Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100364 containerd[1490]: time="2025-10-27T23:34:22.100363748Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100416 containerd[1490]: time="2025-10-27T23:34:22.100376882Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100416 containerd[1490]: time="2025-10-27T23:34:22.100385434Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100463 containerd[1490]: time="2025-10-27T23:34:22.100449347Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100666 containerd[1490]: time="2025-10-27T23:34:22.100647883Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100790 containerd[1490]: time="2025-10-27T23:34:22.100772845Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 27 23:34:22.100790 containerd[1490]: time="2025-10-27T23:34:22.100788996Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Oct 27 23:34:22.100875 containerd[1490]: time="2025-10-27T23:34:22.100860888Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 27 23:34:22.100915 containerd[1490]: time="2025-10-27T23:34:22.100904566Z" level=info msg="metadata content store policy set" policy=shared Oct 27 23:34:22.104084 containerd[1490]: time="2025-10-27T23:34:22.104054141Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 27 23:34:22.104200 containerd[1490]: time="2025-10-27T23:34:22.104103737Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 27 23:34:22.104200 containerd[1490]: time="2025-10-27T23:34:22.104119200Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Oct 27 23:34:22.104200 containerd[1490]: time="2025-10-27T23:34:22.104133365Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Oct 27 23:34:22.104200 containerd[1490]: time="2025-10-27T23:34:22.104147071Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 27 23:34:22.104289 containerd[1490]: time="2025-10-27T23:34:22.104268483Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 27 23:34:22.104539 containerd[1490]: time="2025-10-27T23:34:22.104522723Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 27 23:34:22.104682 containerd[1490]: time="2025-10-27T23:34:22.104661355Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Oct 27 23:34:22.104722 containerd[1490]: time="2025-10-27T23:34:22.104683919Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Oct 27 23:34:22.104722 containerd[1490]: time="2025-10-27T23:34:22.104697893Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Oct 27 23:34:22.104722 containerd[1490]: time="2025-10-27T23:34:22.104710454Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 27 23:34:22.104769 containerd[1490]: time="2025-10-27T23:34:22.104729124Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 27 23:34:22.104769 containerd[1490]: time="2025-10-27T23:34:22.104741189Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 27 23:34:22.104769 containerd[1490]: time="2025-10-27T23:34:22.104753177Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 27 23:34:22.104812 containerd[1490]: time="2025-10-27T23:34:22.104769022Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 27 23:34:22.104812 containerd[1490]: time="2025-10-27T23:34:22.104781469Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 27 23:34:22.104812 containerd[1490]: time="2025-10-27T23:34:22.104792121Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 27 23:34:22.104812 containerd[1490]: time="2025-10-27T23:34:22.104802506Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 27 23:34:22.104885 containerd[1490]: time="2025-10-27T23:34:22.104820603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104885 containerd[1490]: time="2025-10-27T23:34:22.104840227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104885 containerd[1490]: time="2025-10-27T23:34:22.104855461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104885 containerd[1490]: time="2025-10-27T23:34:22.104866877Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104885 containerd[1490]: time="2025-10-27T23:34:22.104879247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104965 containerd[1490]: time="2025-10-27T23:34:22.104891770Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104965 containerd[1490]: time="2025-10-27T23:34:22.104904064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104965 containerd[1490]: time="2025-10-27T23:34:22.104915824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104965 containerd[1490]: time="2025-10-27T23:34:22.104929836Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104965 containerd[1490]: time="2025-10-27T23:34:22.104944879Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.104965 containerd[1490]: time="2025-10-27T23:34:22.104956829Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.105055 containerd[1490]: time="2025-10-27T23:34:22.104969275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.105055 containerd[1490]: time="2025-10-27T23:34:22.104981226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.105055 containerd[1490]: time="2025-10-27T23:34:22.104994703Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Oct 27 23:34:22.105055 containerd[1490]: time="2025-10-27T23:34:22.105013259Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.105055 containerd[1490]: time="2025-10-27T23:34:22.105026851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.105055 containerd[1490]: time="2025-10-27T23:34:22.105036510Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 27 23:34:22.105224 containerd[1490]: time="2025-10-27T23:34:22.105206831Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 Oct 27 23:34:22.105255 containerd[1490]: time="2025-10-27T23:34:22.105225730Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Oct 27 23:34:22.105255 containerd[1490]: time="2025-10-27T23:34:22.105236077Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 27 23:34:22.105255 containerd[1490]: time="2025-10-27T23:34:22.105246958Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Oct 27 23:34:22.105430 containerd[1490]: time="2025-10-27T23:34:22.105256007Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.105430 containerd[1490]: time="2025-10-27T23:34:22.105266926Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Oct 27 23:34:22.105430 containerd[1490]: time="2025-10-27T23:34:22.105275555Z" level=info msg="NRI interface is disabled by configuration." Oct 27 23:34:22.105430 containerd[1490]: time="2025-10-27T23:34:22.105284565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 27 23:34:22.105649 containerd[1490]: time="2025-10-27T23:34:22.105600657Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false 
EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 27 23:34:22.105770 containerd[1490]: time="2025-10-27T23:34:22.105652123Z" level=info msg="Connect containerd service" Oct 27 23:34:22.105770 containerd[1490]: time="2025-10-27T23:34:22.105682285Z" level=info msg="using legacy CRI server" Oct 27 23:34:22.105770 containerd[1490]: time="2025-10-27T23:34:22.105688967Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Oct 27 23:34:22.105931 containerd[1490]: time="2025-10-27T23:34:22.105904645Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 27 23:34:22.106646 containerd[1490]: time="2025-10-27T23:34:22.106620404Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 23:34:22.106827 containerd[1490]: time="2025-10-27T23:34:22.106798360Z" level=info msg="Start subscribing containerd event" Oct 27 23:34:22.106862 containerd[1490]: time="2025-10-27T23:34:22.106844291Z" level=info msg="Start recovering state" Oct 27 23:34:22.107352 containerd[1490]: time="2025-10-27T23:34:22.106900453Z" level=info msg="Start event monitor" Oct 27 23:34:22.107352 containerd[1490]: time="2025-10-27T23:34:22.106914809Z" level=info msg="Start snapshots syncer" Oct 27 23:34:22.107352 containerd[1490]: time="2025-10-27T23:34:22.106922636Z" level=info msg="Start cni network conf syncer for default" Oct 27 23:34:22.107352 containerd[1490]: time="2025-10-27T23:34:22.106929088Z" level=info msg="Start streaming server" Oct 27 23:34:22.107501 containerd[1490]: time="2025-10-27T23:34:22.107475061Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 27 23:34:22.107521 containerd[1490]: time="2025-10-27T23:34:22.107510874Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 27 23:34:22.107602 containerd[1490]: time="2025-10-27T23:34:22.107576620Z" level=info msg="containerd successfully booted in 0.036362s" Oct 27 23:34:22.107708 systemd[1]: Started containerd.service - containerd container runtime. Oct 27 23:34:22.287328 tar[1481]: linux-arm64/README.md Oct 27 23:34:22.305615 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Oct 27 23:34:22.790447 sshd_keygen[1480]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 27 23:34:22.807346 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Oct 27 23:34:22.819819 systemd[1]: Starting issuegen.service - Generate /run/issue... Oct 27 23:34:22.825790 systemd[1]: issuegen.service: Deactivated successfully. Oct 27 23:34:22.825987 systemd[1]: Finished issuegen.service - Generate /run/issue. Oct 27 23:34:22.828300 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Oct 27 23:34:22.837716 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Oct 27 23:34:22.840370 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Oct 27 23:34:22.842247 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Oct 27 23:34:22.843462 systemd[1]: Reached target getty.target - Login Prompts. Oct 27 23:34:22.983777 systemd-networkd[1405]: eth0: Gained IPv6LL Oct 27 23:34:22.985995 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Oct 27 23:34:22.987571 systemd[1]: Reached target network-online.target - Network is Online. Oct 27 23:34:22.999814 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Oct 27 23:34:23.002055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:23.003970 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Oct 27 23:34:23.016086 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 27 23:34:23.016273 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Oct 27 23:34:23.018112 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Oct 27 23:34:23.021958 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Oct 27 23:34:23.534889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:23.538467 systemd[1]: Reached target multi-user.target - Multi-User System. Oct 27 23:34:23.538911 (kubelet)[1570]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:34:23.542472 systemd[1]: Startup finished in 601ms (kernel) + 5.454s (initrd) + 3.431s (userspace) = 9.486s. Oct 27 23:34:23.882693 kubelet[1570]: E1027 23:34:23.882634 1570 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:34:23.885056 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:34:23.885216 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:34:23.886653 systemd[1]: kubelet.service: Consumed 752ms CPU time, 260.4M memory peak. Oct 27 23:34:27.076117 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Oct 27 23:34:27.077347 systemd[1]: Started sshd@0-10.0.0.77:22-10.0.0.1:51710.service - OpenSSH per-connection server daemon (10.0.0.1:51710). Oct 27 23:34:27.134455 sshd[1583]: Accepted publickey for core from 10.0.0.1 port 51710 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:34:27.135859 sshd-session[1583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:34:27.142875 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Oct 27 23:34:27.151900 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Oct 27 23:34:27.157154 systemd-logind[1469]: New session 1 of user core. Oct 27 23:34:27.161271 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Oct 27 23:34:27.177005 systemd[1]: Starting user@500.service - User Manager for UID 500... Oct 27 23:34:27.179518 (systemd)[1587]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 27 23:34:27.181657 systemd-logind[1469]: New session c1 of user core. 
Oct 27 23:34:27.278659 systemd[1587]: Queued start job for default target default.target. Oct 27 23:34:27.288584 systemd[1587]: Created slice app.slice - User Application Slice. Oct 27 23:34:27.288636 systemd[1587]: Reached target paths.target - Paths. Oct 27 23:34:27.288689 systemd[1587]: Reached target timers.target - Timers. Oct 27 23:34:27.290103 systemd[1587]: Starting dbus.socket - D-Bus User Message Bus Socket... Oct 27 23:34:27.299790 systemd[1587]: Listening on dbus.socket - D-Bus User Message Bus Socket. Oct 27 23:34:27.299864 systemd[1587]: Reached target sockets.target - Sockets. Oct 27 23:34:27.299908 systemd[1587]: Reached target basic.target - Basic System. Oct 27 23:34:27.299936 systemd[1587]: Reached target default.target - Main User Target. Oct 27 23:34:27.299962 systemd[1587]: Startup finished in 112ms. Oct 27 23:34:27.300136 systemd[1]: Started user@500.service - User Manager for UID 500. Oct 27 23:34:27.301672 systemd[1]: Started session-1.scope - Session 1 of User core. Oct 27 23:34:27.361048 systemd[1]: Started sshd@1-10.0.0.77:22-10.0.0.1:51726.service - OpenSSH per-connection server daemon (10.0.0.1:51726). Oct 27 23:34:27.409877 sshd[1598]: Accepted publickey for core from 10.0.0.1 port 51726 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:34:27.411212 sshd-session[1598]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:34:27.415716 systemd-logind[1469]: New session 2 of user core. Oct 27 23:34:27.424800 systemd[1]: Started session-2.scope - Session 2 of User core. Oct 27 23:34:27.476872 sshd[1600]: Connection closed by 10.0.0.1 port 51726 Oct 27 23:34:27.477368 sshd-session[1598]: pam_unix(sshd:session): session closed for user core Oct 27 23:34:27.488631 systemd[1]: sshd@1-10.0.0.77:22-10.0.0.1:51726.service: Deactivated successfully. Oct 27 23:34:27.490582 systemd[1]: session-2.scope: Deactivated successfully. Oct 27 23:34:27.491231 systemd-logind[1469]: Session 2 logged out. Waiting for processes to exit. Oct 27 23:34:27.500889 systemd[1]: Started sshd@2-10.0.0.77:22-10.0.0.1:51734.service - OpenSSH per-connection server daemon (10.0.0.1:51734). Oct 27 23:34:27.501480 systemd-logind[1469]: Removed session 2. Oct 27 23:34:27.537768 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 51734 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:34:27.538987 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:34:27.543030 systemd-logind[1469]: New session 3 of user core. Oct 27 23:34:27.551765 systemd[1]: Started session-3.scope - Session 3 of User core. Oct 27 23:34:27.599547 sshd[1608]: Connection closed by 10.0.0.1 port 51734 Oct 27 23:34:27.599873 sshd-session[1605]: pam_unix(sshd:session): session closed for user core Oct 27 23:34:27.610978 systemd[1]: sshd@2-10.0.0.77:22-10.0.0.1:51734.service: Deactivated successfully. Oct 27 23:34:27.612857 systemd[1]: session-3.scope: Deactivated successfully. Oct 27 23:34:27.615718 systemd-logind[1469]: Session 3 logged out. Waiting for processes to exit. Oct 27 23:34:27.625926 systemd[1]: Started sshd@3-10.0.0.77:22-10.0.0.1:51738.service - OpenSSH per-connection server daemon (10.0.0.1:51738). Oct 27 23:34:27.627578 systemd-logind[1469]: Removed session 3. 
Oct 27 23:34:27.662274 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 51738 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:34:27.663523 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:34:27.668889 systemd-logind[1469]: New session 4 of user core. Oct 27 23:34:27.675768 systemd[1]: Started session-4.scope - Session 4 of User core. Oct 27 23:34:27.737787 sshd[1616]: Connection closed by 10.0.0.1 port 51738 Oct 27 23:34:27.738117 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Oct 27 23:34:27.749753 systemd[1]: sshd@3-10.0.0.77:22-10.0.0.1:51738.service: Deactivated successfully. Oct 27 23:34:27.751507 systemd[1]: session-4.scope: Deactivated successfully. Oct 27 23:34:27.753812 systemd-logind[1469]: Session 4 logged out. Waiting for processes to exit. Oct 27 23:34:27.754221 systemd[1]: Started sshd@4-10.0.0.77:22-10.0.0.1:51742.service - OpenSSH per-connection server daemon (10.0.0.1:51742). Oct 27 23:34:27.755453 systemd-logind[1469]: Removed session 4. Oct 27 23:34:27.794410 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 51742 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:34:27.795804 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:34:27.801901 systemd-logind[1469]: New session 5 of user core. Oct 27 23:34:27.814805 systemd[1]: Started session-5.scope - Session 5 of User core. Oct 27 23:34:27.870556 sudo[1625]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 27 23:34:27.870844 sudo[1625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:34:27.889601 sudo[1625]: pam_unix(sudo:session): session closed for user root Oct 27 23:34:27.891228 sshd[1624]: Connection closed by 10.0.0.1 port 51742 Oct 27 23:34:27.891834 sshd-session[1621]: pam_unix(sshd:session): session closed for user core Oct 27 23:34:27.901771 systemd[1]: sshd@4-10.0.0.77:22-10.0.0.1:51742.service: Deactivated successfully. Oct 27 23:34:27.903897 systemd[1]: session-5.scope: Deactivated successfully. Oct 27 23:34:27.904536 systemd-logind[1469]: Session 5 logged out. Waiting for processes to exit. Oct 27 23:34:27.910844 systemd[1]: Started sshd@5-10.0.0.77:22-10.0.0.1:51750.service - OpenSSH per-connection server daemon (10.0.0.1:51750). Oct 27 23:34:27.912623 systemd-logind[1469]: Removed session 5. Oct 27 23:34:27.947448 sshd[1630]: Accepted publickey for core from 10.0.0.1 port 51750 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:34:27.949021 sshd-session[1630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:34:27.954426 systemd-logind[1469]: New session 6 of user core. Oct 27 23:34:27.961787 systemd[1]: Started session-6.scope - Session 6 of User core. 
Oct 27 23:34:28.015121 sudo[1635]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 27 23:34:28.015391 sudo[1635]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:34:28.019532 sudo[1635]: pam_unix(sudo:session): session closed for user root Oct 27 23:34:28.024840 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Oct 27 23:34:28.025232 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:34:28.052202 systemd[1]: Starting audit-rules.service - Load Audit Rules... Oct 27 23:34:28.076851 augenrules[1657]: No rules Oct 27 23:34:28.078276 systemd[1]: audit-rules.service: Deactivated successfully. Oct 27 23:34:28.078502 systemd[1]: Finished audit-rules.service - Load Audit Rules. Oct 27 23:34:28.080132 sudo[1634]: pam_unix(sudo:session): session closed for user root Oct 27 23:34:28.081919 sshd[1633]: Connection closed by 10.0.0.1 port 51750 Oct 27 23:34:28.081788 sshd-session[1630]: pam_unix(sshd:session): session closed for user core Oct 27 23:34:28.099696 systemd[1]: Started sshd@6-10.0.0.77:22-10.0.0.1:51766.service - OpenSSH per-connection server daemon (10.0.0.1:51766). Oct 27 23:34:28.100127 systemd[1]: sshd@5-10.0.0.77:22-10.0.0.1:51750.service: Deactivated successfully. Oct 27 23:34:28.102921 systemd[1]: session-6.scope: Deactivated successfully. Oct 27 23:34:28.104204 systemd-logind[1469]: Session 6 logged out. Waiting for processes to exit. Oct 27 23:34:28.105479 systemd-logind[1469]: Removed session 6. Oct 27 23:34:28.158551 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 51766 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:34:28.159735 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:34:28.165673 systemd-logind[1469]: New session 7 of user core. Oct 27 23:34:28.175782 systemd[1]: Started session-7.scope - Session 7 of User core. Oct 27 23:34:28.226628 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 27 23:34:28.226933 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Oct 27 23:34:28.566850 systemd[1]: Starting docker.service - Docker Application Container Engine... Oct 27 23:34:28.566990 (dockerd)[1689]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Oct 27 23:34:28.779303 dockerd[1689]: time="2025-10-27T23:34:28.779249786Z" level=info msg="Starting up" Oct 27 23:34:28.971024 dockerd[1689]: time="2025-10-27T23:34:28.970105155Z" level=info msg="Loading containers: start." Oct 27 23:34:29.125609 kernel: Initializing XFRM netlink socket Oct 27 23:34:29.194347 systemd-networkd[1405]: docker0: Link UP Oct 27 23:34:29.236040 dockerd[1689]: time="2025-10-27T23:34:29.235865176Z" level=info msg="Loading containers: done." Oct 27 23:34:29.249161 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3907617363-merged.mount: Deactivated successfully. 
Oct 27 23:34:29.253255 dockerd[1689]: time="2025-10-27T23:34:29.253199285Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Oct 27 23:34:29.253349 dockerd[1689]: time="2025-10-27T23:34:29.253310069Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Oct 27 23:34:29.253525 dockerd[1689]: time="2025-10-27T23:34:29.253489249Z" level=info msg="Daemon has completed initialization" Oct 27 23:34:29.284447 dockerd[1689]: time="2025-10-27T23:34:29.284389854Z" level=info msg="API listen on /run/docker.sock" Oct 27 23:34:29.284571 systemd[1]: Started docker.service - Docker Application Container Engine. Oct 27 23:34:29.923181 containerd[1490]: time="2025-10-27T23:34:29.923133330Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Oct 27 23:34:30.516360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1274677091.mount: Deactivated successfully. Oct 27 23:34:31.880482 containerd[1490]: time="2025-10-27T23:34:31.880428468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:31.882244 containerd[1490]: time="2025-10-27T23:34:31.882216152Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Oct 27 23:34:31.884946 containerd[1490]: time="2025-10-27T23:34:31.883393344Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:31.885916 containerd[1490]: time="2025-10-27T23:34:31.885891218Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:31.887369 containerd[1490]: time="2025-10-27T23:34:31.887157889Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.963984766s" Oct 27 23:34:31.887369 containerd[1490]: time="2025-10-27T23:34:31.887197341Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Oct 27 23:34:31.887960 containerd[1490]: time="2025-10-27T23:34:31.887783925Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Oct 27 23:34:33.055161 containerd[1490]: time="2025-10-27T23:34:33.055097069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:33.056446 containerd[1490]: time="2025-10-27T23:34:33.056177433Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Oct 27 23:34:33.057302 containerd[1490]: time="2025-10-27T23:34:33.057249169Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:33.061777 containerd[1490]: time="2025-10-27T23:34:33.060614744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:33.061777 containerd[1490]: time="2025-10-27T23:34:33.061668787Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.17384626s" Oct 27 23:34:33.061777 containerd[1490]: time="2025-10-27T23:34:33.061697008Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Oct 27 23:34:33.062251 containerd[1490]: time="2025-10-27T23:34:33.062225573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Oct 27 23:34:34.037198 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Oct 27 23:34:34.046754 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:34.141245 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:34.145234 (kubelet)[1960]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:34:34.181594 kubelet[1960]: E1027 23:34:34.180641 1960 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:34:34.183609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:34:34.183757 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:34:34.184111 systemd[1]: kubelet.service: Consumed 134ms CPU time, 109.7M memory peak. 
Oct 27 23:34:34.190892 containerd[1490]: time="2025-10-27T23:34:34.190855386Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:34.193597 containerd[1490]: time="2025-10-27T23:34:34.193547960Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Oct 27 23:34:34.195585 containerd[1490]: time="2025-10-27T23:34:34.194846667Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:34.197798 containerd[1490]: time="2025-10-27T23:34:34.197761942Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:34.198796 containerd[1490]: time="2025-10-27T23:34:34.198765143Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.136506787s" Oct 27 23:34:34.198835 containerd[1490]: time="2025-10-27T23:34:34.198793877Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Oct 27 23:34:34.199333 containerd[1490]: time="2025-10-27T23:34:34.199312123Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Oct 27 23:34:35.279061 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount484768551.mount: Deactivated successfully. 
Oct 27 23:34:35.630724 containerd[1490]: time="2025-10-27T23:34:35.630665001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:35.631322 containerd[1490]: time="2025-10-27T23:34:35.631290227Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Oct 27 23:34:35.632368 containerd[1490]: time="2025-10-27T23:34:35.632297047Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:35.634107 containerd[1490]: time="2025-10-27T23:34:35.634064043Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:35.634789 containerd[1490]: time="2025-10-27T23:34:35.634754937Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.435412523s" Oct 27 23:34:35.634789 containerd[1490]: time="2025-10-27T23:34:35.634785094Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Oct 27 23:34:35.635351 containerd[1490]: time="2025-10-27T23:34:35.635267157Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Oct 27 23:34:36.182913 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1096082237.mount: Deactivated successfully. 
Oct 27 23:34:36.931697 containerd[1490]: time="2025-10-27T23:34:36.931651887Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:36.932600 containerd[1490]: time="2025-10-27T23:34:36.932344075Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Oct 27 23:34:36.933336 containerd[1490]: time="2025-10-27T23:34:36.933307587Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:36.939555 containerd[1490]: time="2025-10-27T23:34:36.939501140Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:36.940942 containerd[1490]: time="2025-10-27T23:34:36.940824270Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.305382595s" Oct 27 23:34:36.940942 containerd[1490]: time="2025-10-27T23:34:36.940857634Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Oct 27 23:34:36.941292 containerd[1490]: time="2025-10-27T23:34:36.941269881Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Oct 27 23:34:37.354100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080230257.mount: Deactivated successfully. 
Oct 27 23:34:37.359599 containerd[1490]: time="2025-10-27T23:34:37.359288288Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:37.360517 containerd[1490]: time="2025-10-27T23:34:37.360310374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Oct 27 23:34:37.361322 containerd[1490]: time="2025-10-27T23:34:37.361289644Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:37.363714 containerd[1490]: time="2025-10-27T23:34:37.363671171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:37.365117 containerd[1490]: time="2025-10-27T23:34:37.364978257Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 423.67838ms" Oct 27 23:34:37.365117 containerd[1490]: time="2025-10-27T23:34:37.365010736Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Oct 27 23:34:37.365687 containerd[1490]: time="2025-10-27T23:34:37.365506473Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Oct 27 23:34:37.881883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2138206361.mount: Deactivated successfully. Oct 27 23:34:39.647401 containerd[1490]: time="2025-10-27T23:34:39.647340968Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:39.647896 containerd[1490]: time="2025-10-27T23:34:39.647840728Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Oct 27 23:34:39.648797 containerd[1490]: time="2025-10-27T23:34:39.648767073Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:39.651903 containerd[1490]: time="2025-10-27T23:34:39.651876626Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:34:39.653236 containerd[1490]: time="2025-10-27T23:34:39.653213154Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.287675105s" Oct 27 23:34:39.653289 containerd[1490]: time="2025-10-27T23:34:39.653242296Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Oct 27 23:34:44.287301 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
Oct 27 23:34:44.294993 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:44.442124 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:44.445998 (kubelet)[2118]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Oct 27 23:34:44.481465 kubelet[2118]: E1027 23:34:44.481393 2118 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Oct 27 23:34:44.483954 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 27 23:34:44.484119 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 27 23:34:44.485825 systemd[1]: kubelet.service: Consumed 135ms CPU time, 105.2M memory peak. Oct 27 23:34:44.878795 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:44.878940 systemd[1]: kubelet.service: Consumed 135ms CPU time, 105.2M memory peak. Oct 27 23:34:44.889834 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:44.914961 systemd[1]: Reload requested from client PID 2133 ('systemctl') (unit session-7.scope)... Oct 27 23:34:44.914984 systemd[1]: Reloading... Oct 27 23:34:44.986714 zram_generator::config[2177]: No configuration found. Oct 27 23:34:45.112149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:34:45.185362 systemd[1]: Reloading finished in 270 ms. Oct 27 23:34:45.221910 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:45.224630 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:45.225868 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 23:34:45.226097 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:45.226147 systemd[1]: kubelet.service: Consumed 87ms CPU time, 95M memory peak. Oct 27 23:34:45.229891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:45.337830 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:45.342151 (kubelet)[2224]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 23:34:45.376857 kubelet[2224]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 23:34:45.376857 kubelet[2224]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 23:34:45.376857 kubelet[2224]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 27 23:34:45.377223 kubelet[2224]: I1027 23:34:45.376903 2224 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 23:34:45.862722 kubelet[2224]: I1027 23:34:45.860783 2224 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 27 23:34:45.862722 kubelet[2224]: I1027 23:34:45.861049 2224 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 23:34:45.862722 kubelet[2224]: I1027 23:34:45.861501 2224 server.go:954] "Client rotation is on, will bootstrap in background" Oct 27 23:34:45.883974 kubelet[2224]: E1027 23:34:45.883934 2224 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.77:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:45.885634 kubelet[2224]: I1027 23:34:45.885459 2224 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 23:34:45.890138 kubelet[2224]: E1027 23:34:45.890102 2224 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 27 23:34:45.890138 kubelet[2224]: I1027 23:34:45.890135 2224 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 27 23:34:45.893139 kubelet[2224]: I1027 23:34:45.893109 2224 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 27 23:34:45.894182 kubelet[2224]: I1027 23:34:45.894081 2224 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 23:34:45.894457 kubelet[2224]: I1027 23:34:45.894269 2224 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 23:34:45.894864 kubelet[2224]: I1027 23:34:45.894842 2224 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 23:34:45.895093 kubelet[2224]: I1027 23:34:45.895078 2224 container_manager_linux.go:304] "Creating device plugin manager" Oct 27 23:34:45.895416 kubelet[2224]: I1027 23:34:45.895396 2224 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:34:45.898673 kubelet[2224]: I1027 23:34:45.898644 2224 kubelet.go:446] "Attempting to sync node with API server" Oct 27 23:34:45.898796 kubelet[2224]: I1027 23:34:45.898783 2224 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 23:34:45.898922 kubelet[2224]: I1027 23:34:45.898907 2224 kubelet.go:352] "Adding apiserver pod source" Oct 27 23:34:45.898992 kubelet[2224]: I1027 23:34:45.898973 2224 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 27 23:34:45.900008 kubelet[2224]: W1027 23:34:45.899924 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:45.900008 kubelet[2224]: E1027 23:34:45.899999 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:45.901981 kubelet[2224]: W1027 23:34:45.901927 2224 
reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:45.902099 kubelet[2224]: E1027 23:34:45.902007 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:45.902217 kubelet[2224]: I1027 23:34:45.902196 2224 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 27 23:34:45.902885 kubelet[2224]: I1027 23:34:45.902863 2224 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 27 23:34:45.903083 kubelet[2224]: W1027 23:34:45.902986 2224 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 27 23:34:45.904418 kubelet[2224]: I1027 23:34:45.904372 2224 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 23:34:45.904418 kubelet[2224]: I1027 23:34:45.904421 2224 server.go:1287] "Started kubelet" Oct 27 23:34:45.905680 kubelet[2224]: I1027 23:34:45.905620 2224 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 23:34:45.908297 kubelet[2224]: I1027 23:34:45.908261 2224 server.go:479] "Adding debug handlers to kubelet server" Oct 27 23:34:45.908482 kubelet[2224]: I1027 23:34:45.908405 2224 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 23:34:45.909257 kubelet[2224]: I1027 23:34:45.908755 2224 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 23:34:45.911479 kubelet[2224]: I1027 23:34:45.911446 2224 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 23:34:45.911723 kubelet[2224]: I1027 23:34:45.911709 2224 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 23:34:45.912187 kubelet[2224]: I1027 23:34:45.912164 2224 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 23:34:45.913423 kubelet[2224]: I1027 23:34:45.913383 2224 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 23:34:45.913560 kubelet[2224]: I1027 23:34:45.913544 2224 reconciler.go:26] "Reconciler: start to sync state" Oct 27 23:34:45.913661 kubelet[2224]: E1027 23:34:45.911555 2224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18727d3881c9a144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 23:34:45.904400708 +0000 UTC m=+0.558974804,LastTimestamp:2025-10-27 23:34:45.904400708 +0000 UTC 
m=+0.558974804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 23:34:45.913747 kubelet[2224]: E1027 23:34:45.911911 2224 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 23:34:45.914419 kubelet[2224]: E1027 23:34:45.914340 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="200ms" Oct 27 23:34:45.914772 kubelet[2224]: W1027 23:34:45.914505 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:45.914936 kubelet[2224]: E1027 23:34:45.914883 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:45.914936 kubelet[2224]: E1027 23:34:45.914895 2224 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 23:34:45.914936 kubelet[2224]: I1027 23:34:45.913394 2224 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 23:34:45.916427 kubelet[2224]: I1027 23:34:45.915730 2224 factory.go:221] Registration of the containerd container factory successfully Oct 27 23:34:45.916427 kubelet[2224]: I1027 23:34:45.915754 2224 factory.go:221] Registration of the systemd container factory successfully Oct 27 23:34:45.927699 kubelet[2224]: I1027 23:34:45.927649 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 27 23:34:45.929020 kubelet[2224]: I1027 23:34:45.928995 2224 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 27 23:34:45.929147 kubelet[2224]: I1027 23:34:45.929135 2224 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 27 23:34:45.929208 kubelet[2224]: I1027 23:34:45.929196 2224 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 27 23:34:45.929250 kubelet[2224]: I1027 23:34:45.929242 2224 kubelet.go:2382] "Starting kubelet main sync loop" Oct 27 23:34:45.929353 kubelet[2224]: E1027 23:34:45.929331 2224 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 23:34:45.932762 kubelet[2224]: W1027 23:34:45.932727 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:45.932845 kubelet[2224]: E1027 23:34:45.932778 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:45.933223 kubelet[2224]: I1027 23:34:45.933201 2224 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 23:34:45.933223 kubelet[2224]: I1027 23:34:45.933218 2224 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 23:34:45.933295 kubelet[2224]: I1027 23:34:45.933239 2224 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:34:46.007651 kubelet[2224]: I1027 23:34:46.007601 2224 policy_none.go:49] "None policy: Start" Oct 27 23:34:46.007651 kubelet[2224]: I1027 23:34:46.007653 2224 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 23:34:46.007651 kubelet[2224]: I1027 23:34:46.007666 2224 state_mem.go:35] "Initializing new in-memory state store" Oct 27 23:34:46.013558 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Oct 27 23:34:46.013992 kubelet[2224]: E1027 23:34:46.013960 2224 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 23:34:46.028392 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Oct 27 23:34:46.030119 kubelet[2224]: E1027 23:34:46.029831 2224 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Oct 27 23:34:46.031980 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Oct 27 23:34:46.043507 kubelet[2224]: I1027 23:34:46.043459 2224 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 27 23:34:46.044038 kubelet[2224]: I1027 23:34:46.043722 2224 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 23:34:46.044038 kubelet[2224]: I1027 23:34:46.043741 2224 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 23:34:46.044038 kubelet[2224]: I1027 23:34:46.043960 2224 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 23:34:46.045033 kubelet[2224]: E1027 23:34:46.045010 2224 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 23:34:46.045114 kubelet[2224]: E1027 23:34:46.045057 2224 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Oct 27 23:34:46.116855 kubelet[2224]: E1027 23:34:46.115834 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="400ms" Oct 27 23:34:46.145854 kubelet[2224]: I1027 23:34:46.145798 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:34:46.146327 kubelet[2224]: E1027 23:34:46.146300 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Oct 27 23:34:46.240426 systemd[1]: Created slice kubepods-burstable-podefcae0686705c689802783a3e4231d7c.slice - libcontainer container kubepods-burstable-podefcae0686705c689802783a3e4231d7c.slice. Oct 27 23:34:46.251689 kubelet[2224]: E1027 23:34:46.251606 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:46.255484 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Oct 27 23:34:46.272219 kubelet[2224]: E1027 23:34:46.272180 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:46.275154 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. 
Oct 27 23:34:46.277017 kubelet[2224]: E1027 23:34:46.276966 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:46.347735 kubelet[2224]: I1027 23:34:46.347692 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:34:46.348113 kubelet[2224]: E1027 23:34:46.348070 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Oct 27 23:34:46.416166 kubelet[2224]: I1027 23:34:46.415920 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:46.416166 kubelet[2224]: I1027 23:34:46.415957 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:46.416166 kubelet[2224]: I1027 23:34:46.415981 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efcae0686705c689802783a3e4231d7c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"efcae0686705c689802783a3e4231d7c\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:46.416166 kubelet[2224]: I1027 23:34:46.416005 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:46.416166 kubelet[2224]: I1027 23:34:46.416022 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:46.416793 kubelet[2224]: I1027 23:34:46.416040 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:46.416793 kubelet[2224]: I1027 23:34:46.416057 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efcae0686705c689802783a3e4231d7c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"efcae0686705c689802783a3e4231d7c\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:46.416793 kubelet[2224]: I1027 23:34:46.416131 2224 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efcae0686705c689802783a3e4231d7c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"efcae0686705c689802783a3e4231d7c\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:46.416793 kubelet[2224]: I1027 23:34:46.416161 2224 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:46.516699 kubelet[2224]: E1027 23:34:46.516652 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="800ms" Oct 27 23:34:46.552901 kubelet[2224]: E1027 23:34:46.552862 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:46.554012 containerd[1490]: time="2025-10-27T23:34:46.553972881Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:efcae0686705c689802783a3e4231d7c,Namespace:kube-system,Attempt:0,}" Oct 27 23:34:46.573253 kubelet[2224]: E1027 23:34:46.573214 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:46.573763 containerd[1490]: time="2025-10-27T23:34:46.573720280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Oct 27 23:34:46.578128 kubelet[2224]: E1027 23:34:46.578043 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:46.578475 containerd[1490]: time="2025-10-27T23:34:46.578442587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Oct 27 23:34:46.750740 kubelet[2224]: I1027 23:34:46.750621 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:34:46.750967 kubelet[2224]: E1027 23:34:46.750938 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Oct 27 23:34:46.865558 kubelet[2224]: E1027 23:34:46.865429 2224 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.77:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.77:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18727d3881c9a144 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-10-27 23:34:45.904400708 +0000 UTC m=+0.558974804,LastTimestamp:2025-10-27 23:34:45.904400708 +0000 UTC 
m=+0.558974804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Oct 27 23:34:46.963241 kubelet[2224]: W1027 23:34:46.963134 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:46.963351 kubelet[2224]: E1027 23:34:46.963273 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.77:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:47.107135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4050239374.mount: Deactivated successfully. Oct 27 23:34:47.108228 kubelet[2224]: W1027 23:34:47.108165 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:47.108307 kubelet[2224]: E1027 23:34:47.108245 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.77:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:47.115037 containerd[1490]: time="2025-10-27T23:34:47.114991575Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:34:47.117625 containerd[1490]: time="2025-10-27T23:34:47.117572320Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Oct 27 23:34:47.119907 containerd[1490]: time="2025-10-27T23:34:47.119555472Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:34:47.120676 containerd[1490]: time="2025-10-27T23:34:47.120634766Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:34:47.122095 containerd[1490]: time="2025-10-27T23:34:47.122060140Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:34:47.122521 containerd[1490]: time="2025-10-27T23:34:47.122456698Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 27 23:34:47.123446 containerd[1490]: time="2025-10-27T23:34:47.123373016Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Oct 27 23:34:47.125758 containerd[1490]: time="2025-10-27T23:34:47.125690586Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Oct 27 23:34:47.131150 containerd[1490]: time="2025-10-27T23:34:47.130677518Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.864486ms" Oct 27 23:34:47.131825 containerd[1490]: time="2025-10-27T23:34:47.131794591Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 577.736585ms" Oct 27 23:34:47.134918 containerd[1490]: time="2025-10-27T23:34:47.134871173Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.359671ms" Oct 27 23:34:47.246668 containerd[1490]: time="2025-10-27T23:34:47.244776564Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:34:47.246668 containerd[1490]: time="2025-10-27T23:34:47.244853439Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:34:47.246668 containerd[1490]: time="2025-10-27T23:34:47.244868774Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:47.246668 containerd[1490]: time="2025-10-27T23:34:47.244948206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:47.249162 containerd[1490]: time="2025-10-27T23:34:47.249041583Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:34:47.249162 containerd[1490]: time="2025-10-27T23:34:47.249097852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:34:47.249162 containerd[1490]: time="2025-10-27T23:34:47.249109194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:47.249162 containerd[1490]: time="2025-10-27T23:34:47.249182436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:47.252103 containerd[1490]: time="2025-10-27T23:34:47.251956987Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:34:47.252103 containerd[1490]: time="2025-10-27T23:34:47.252023679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:34:47.252103 containerd[1490]: time="2025-10-27T23:34:47.252042409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:47.252364 containerd[1490]: time="2025-10-27T23:34:47.252131145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:47.288827 systemd[1]: Started cri-containerd-7acd46d793fc875da4c7bae0ee3dd117cb9b458f083e42222d90a2524e6e1188.scope - libcontainer container 7acd46d793fc875da4c7bae0ee3dd117cb9b458f083e42222d90a2524e6e1188. Oct 27 23:34:47.292430 systemd[1]: Started cri-containerd-c7c691c8da39f32d870e0290299c5c6685d520fed9e611e8d23f0503915a71ed.scope - libcontainer container c7c691c8da39f32d870e0290299c5c6685d520fed9e611e8d23f0503915a71ed. Oct 27 23:34:47.293523 systemd[1]: Started cri-containerd-db9c35e14d2cc29706d43b4102ade7b2ac7fc3a655ea2ff14530e401e59f7106.scope - libcontainer container db9c35e14d2cc29706d43b4102ade7b2ac7fc3a655ea2ff14530e401e59f7106. Oct 27 23:34:47.317777 kubelet[2224]: E1027 23:34:47.317725 2224 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.77:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.77:6443: connect: connection refused" interval="1.6s" Oct 27 23:34:47.331226 containerd[1490]: time="2025-10-27T23:34:47.330139060Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"7acd46d793fc875da4c7bae0ee3dd117cb9b458f083e42222d90a2524e6e1188\"" Oct 27 23:34:47.331226 containerd[1490]: time="2025-10-27T23:34:47.330412338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:efcae0686705c689802783a3e4231d7c,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7c691c8da39f32d870e0290299c5c6685d520fed9e611e8d23f0503915a71ed\"" Oct 27 23:34:47.331883 kubelet[2224]: E1027 23:34:47.331773 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:47.331883 kubelet[2224]: E1027 23:34:47.331811 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:47.335099 containerd[1490]: time="2025-10-27T23:34:47.335042847Z" level=info msg="CreateContainer within sandbox \"7acd46d793fc875da4c7bae0ee3dd117cb9b458f083e42222d90a2524e6e1188\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Oct 27 23:34:47.335174 containerd[1490]: time="2025-10-27T23:34:47.335072159Z" level=info msg="CreateContainer within sandbox \"c7c691c8da39f32d870e0290299c5c6685d520fed9e611e8d23f0503915a71ed\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Oct 27 23:34:47.341314 containerd[1490]: time="2025-10-27T23:34:47.341270451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"db9c35e14d2cc29706d43b4102ade7b2ac7fc3a655ea2ff14530e401e59f7106\"" Oct 27 23:34:47.342022 kubelet[2224]: E1027 23:34:47.341993 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:47.343478 containerd[1490]: time="2025-10-27T23:34:47.343442617Z" level=info msg="CreateContainer within sandbox \"db9c35e14d2cc29706d43b4102ade7b2ac7fc3a655ea2ff14530e401e59f7106\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Oct 27 23:34:47.366670 kubelet[2224]: W1027 23:34:47.365989 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:47.366670 kubelet[2224]: E1027 23:34:47.366063 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.77:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:47.402392 kubelet[2224]: W1027 23:34:47.402315 2224 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.77:6443: connect: connection refused Oct 27 23:34:47.402392 kubelet[2224]: E1027 23:34:47.402388 2224 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.77:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.77:6443: connect: connection refused" logger="UnhandledError" Oct 27 23:34:47.552704 kubelet[2224]: I1027 23:34:47.552675 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:34:47.553094 kubelet[2224]: E1027 23:34:47.553030 2224 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.77:6443/api/v1/nodes\": dial tcp 10.0.0.77:6443: connect: connection refused" node="localhost" Oct 27 23:34:47.553555 containerd[1490]: time="2025-10-27T23:34:47.553406728Z" level=info msg="CreateContainer within sandbox \"7acd46d793fc875da4c7bae0ee3dd117cb9b458f083e42222d90a2524e6e1188\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"1f97c47649f658df42bf6d2ce02b63aba2787e5117812be4bd70f37871c8c092\"" Oct 27 23:34:47.553930 containerd[1490]: time="2025-10-27T23:34:47.553905481Z" level=info msg="StartContainer for \"1f97c47649f658df42bf6d2ce02b63aba2787e5117812be4bd70f37871c8c092\"" Oct 27 23:34:47.561963 containerd[1490]: time="2025-10-27T23:34:47.561907854Z" level=info msg="CreateContainer within sandbox \"c7c691c8da39f32d870e0290299c5c6685d520fed9e611e8d23f0503915a71ed\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"dd343fdb8163f999025ff524b02c5bfc22f00ed76467a76bb0d1d5ce3311aea3\"" Oct 27 23:34:47.564114 containerd[1490]: time="2025-10-27T23:34:47.562922693Z" level=info msg="StartContainer for \"dd343fdb8163f999025ff524b02c5bfc22f00ed76467a76bb0d1d5ce3311aea3\"" Oct 27 23:34:47.571035 containerd[1490]: time="2025-10-27T23:34:47.570973508Z" level=info msg="CreateContainer within sandbox \"db9c35e14d2cc29706d43b4102ade7b2ac7fc3a655ea2ff14530e401e59f7106\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"03b0907dcd0286e3d306a03649226087a0861b02cd7693a9f549ed5f4c2966c0\"" Oct 27 23:34:47.571608 containerd[1490]: time="2025-10-27T23:34:47.571582522Z" level=info msg="StartContainer for \"03b0907dcd0286e3d306a03649226087a0861b02cd7693a9f549ed5f4c2966c0\"" Oct 27 23:34:47.577754 systemd[1]: Started cri-containerd-1f97c47649f658df42bf6d2ce02b63aba2787e5117812be4bd70f37871c8c092.scope - libcontainer container 1f97c47649f658df42bf6d2ce02b63aba2787e5117812be4bd70f37871c8c092. Oct 27 23:34:47.593901 systemd[1]: Started cri-containerd-dd343fdb8163f999025ff524b02c5bfc22f00ed76467a76bb0d1d5ce3311aea3.scope - libcontainer container dd343fdb8163f999025ff524b02c5bfc22f00ed76467a76bb0d1d5ce3311aea3. Oct 27 23:34:47.602714 systemd[1]: Started cri-containerd-03b0907dcd0286e3d306a03649226087a0861b02cd7693a9f549ed5f4c2966c0.scope - libcontainer container 03b0907dcd0286e3d306a03649226087a0861b02cd7693a9f549ed5f4c2966c0. Oct 27 23:34:47.623989 containerd[1490]: time="2025-10-27T23:34:47.623292703Z" level=info msg="StartContainer for \"1f97c47649f658df42bf6d2ce02b63aba2787e5117812be4bd70f37871c8c092\" returns successfully" Oct 27 23:34:47.642527 containerd[1490]: time="2025-10-27T23:34:47.642488088Z" level=info msg="StartContainer for \"dd343fdb8163f999025ff524b02c5bfc22f00ed76467a76bb0d1d5ce3311aea3\" returns successfully" Oct 27 23:34:47.647142 containerd[1490]: time="2025-10-27T23:34:47.647104420Z" level=info msg="StartContainer for \"03b0907dcd0286e3d306a03649226087a0861b02cd7693a9f549ed5f4c2966c0\" returns successfully" Oct 27 23:34:47.941049 kubelet[2224]: E1027 23:34:47.940937 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:47.941137 kubelet[2224]: E1027 23:34:47.941078 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:47.943794 kubelet[2224]: E1027 23:34:47.943003 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:47.943794 kubelet[2224]: E1027 23:34:47.943122 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:47.946848 kubelet[2224]: E1027 23:34:47.946643 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:47.946848 kubelet[2224]: E1027 23:34:47.946771 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:48.949608 kubelet[2224]: E1027 23:34:48.949031 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:48.949608 kubelet[2224]: E1027 23:34:48.949150 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:48.949608 kubelet[2224]: E1027 23:34:48.949172 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Oct 27 23:34:48.949608 kubelet[2224]: E1027 23:34:48.949252 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:48.949608 kubelet[2224]: E1027 23:34:48.949407 2224 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Oct 27 23:34:48.949608 kubelet[2224]: E1027 23:34:48.949495 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:49.085832 kubelet[2224]: E1027 23:34:49.085785 2224 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Oct 27 23:34:49.154532 kubelet[2224]: I1027 23:34:49.154494 2224 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:34:49.164615 kubelet[2224]: I1027 23:34:49.164558 2224 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 23:34:49.214832 kubelet[2224]: I1027 23:34:49.214466 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:49.221390 kubelet[2224]: E1027 23:34:49.221355 2224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:49.221773 kubelet[2224]: I1027 23:34:49.221505 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:49.225853 kubelet[2224]: E1027 23:34:49.225820 2224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:49.225853 kubelet[2224]: I1027 23:34:49.225849 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:49.228183 kubelet[2224]: E1027 23:34:49.228139 2224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:49.901280 kubelet[2224]: I1027 23:34:49.901203 2224 apiserver.go:52] "Watching apiserver" Oct 27 23:34:49.915699 kubelet[2224]: I1027 23:34:49.915648 2224 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 23:34:49.949272 kubelet[2224]: I1027 23:34:49.949246 2224 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:49.951176 kubelet[2224]: E1027 23:34:49.951152 2224 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:49.951450 kubelet[2224]: E1027 23:34:49.951298 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:50.720616 kubelet[2224]: I1027 23:34:50.720549 2224 kubelet.go:3194] "Creating a mirror pod for static pod" 
pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:50.726520 kubelet[2224]: E1027 23:34:50.726416 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:50.951396 kubelet[2224]: E1027 23:34:50.951318 2224 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:51.284022 systemd[1]: Reload requested from client PID 2503 ('systemctl') (unit session-7.scope)... Oct 27 23:34:51.284036 systemd[1]: Reloading... Oct 27 23:34:51.354602 zram_generator::config[2550]: No configuration found. Oct 27 23:34:51.638093 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 27 23:34:51.722384 systemd[1]: Reloading finished in 438 ms. Oct 27 23:34:51.744451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:51.752560 systemd[1]: kubelet.service: Deactivated successfully. Oct 27 23:34:51.752811 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:51.752873 systemd[1]: kubelet.service: Consumed 930ms CPU time, 131.4M memory peak. Oct 27 23:34:51.763961 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Oct 27 23:34:51.867493 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Oct 27 23:34:51.871414 (kubelet)[2589]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Oct 27 23:34:51.916672 kubelet[2589]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 23:34:51.916672 kubelet[2589]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Oct 27 23:34:51.916672 kubelet[2589]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 27 23:34:51.916672 kubelet[2589]: I1027 23:34:51.916133 2589 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 27 23:34:51.922247 kubelet[2589]: I1027 23:34:51.922203 2589 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Oct 27 23:34:51.922247 kubelet[2589]: I1027 23:34:51.922230 2589 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 27 23:34:51.922489 kubelet[2589]: I1027 23:34:51.922468 2589 server.go:954] "Client rotation is on, will bootstrap in background" Oct 27 23:34:51.923849 kubelet[2589]: I1027 23:34:51.923827 2589 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Oct 27 23:34:51.926178 kubelet[2589]: I1027 23:34:51.926155 2589 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 27 23:34:51.930098 kubelet[2589]: E1027 23:34:51.929896 2589 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Oct 27 23:34:51.930098 kubelet[2589]: I1027 23:34:51.929931 2589 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Oct 27 23:34:51.934799 kubelet[2589]: I1027 23:34:51.934773 2589 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 27 23:34:51.935016 kubelet[2589]: I1027 23:34:51.934987 2589 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 27 23:34:51.935210 kubelet[2589]: I1027 23:34:51.935017 2589 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Oct 27 23:34:51.935279 kubelet[2589]: I1027 23:34:51.935222 2589 topology_manager.go:138] "Creating topology manager with none policy" Oct 27 23:34:51.935279 kubelet[2589]: I1027 23:34:51.935232 2589 container_manager_linux.go:304] "Creating device plugin manager" Oct 27 23:34:51.935279 kubelet[2589]: I1027 23:34:51.935274 2589 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:34:51.935401 kubelet[2589]: I1027 23:34:51.935392 2589 kubelet.go:446] "Attempting to sync node with API server" Oct 27 23:34:51.935426 kubelet[2589]: I1027 23:34:51.935404 2589 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 27 23:34:51.935426 kubelet[2589]: I1027 23:34:51.935423 2589 kubelet.go:352] "Adding apiserver pod source" Oct 27 23:34:51.935506 kubelet[2589]: I1027 23:34:51.935432 2589 apiserver.go:42] "Waiting for node sync before 
watching apiserver pods" Oct 27 23:34:51.936897 kubelet[2589]: I1027 23:34:51.936869 2589 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Oct 27 23:34:51.939579 kubelet[2589]: I1027 23:34:51.937435 2589 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Oct 27 23:34:51.939579 kubelet[2589]: I1027 23:34:51.937921 2589 watchdog_linux.go:99] "Systemd watchdog is not enabled" Oct 27 23:34:51.939579 kubelet[2589]: I1027 23:34:51.937951 2589 server.go:1287] "Started kubelet" Oct 27 23:34:51.939579 kubelet[2589]: I1027 23:34:51.939057 2589 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 27 23:34:51.939579 kubelet[2589]: I1027 23:34:51.939296 2589 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 27 23:34:51.939579 kubelet[2589]: I1027 23:34:51.939345 2589 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Oct 27 23:34:51.939749 kubelet[2589]: I1027 23:34:51.939593 2589 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 27 23:34:51.940096 kubelet[2589]: I1027 23:34:51.940077 2589 volume_manager.go:297] "Starting Kubelet Volume Manager" Oct 27 23:34:51.940188 kubelet[2589]: I1027 23:34:51.940166 2589 server.go:479] "Adding debug handlers to kubelet server" Oct 27 23:34:51.940276 kubelet[2589]: I1027 23:34:51.940258 2589 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Oct 27 23:34:51.941317 kubelet[2589]: E1027 23:34:51.941288 2589 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Oct 27 23:34:51.942620 kubelet[2589]: I1027 23:34:51.942558 2589 factory.go:221] Registration of the systemd container factory successfully Oct 27 23:34:51.942723 kubelet[2589]: I1027 23:34:51.942701 2589 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Oct 27 23:34:51.943128 kubelet[2589]: I1027 23:34:51.943105 2589 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Oct 27 23:34:51.943247 kubelet[2589]: I1027 23:34:51.943233 2589 reconciler.go:26] "Reconciler: start to sync state" Oct 27 23:34:51.944689 kubelet[2589]: E1027 23:34:51.944665 2589 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 27 23:34:51.947406 kubelet[2589]: I1027 23:34:51.947375 2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 27 23:34:51.950588 kubelet[2589]: I1027 23:34:51.948297 2589 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 27 23:34:51.950588 kubelet[2589]: I1027 23:34:51.948324 2589 status_manager.go:227] "Starting to sync pod status with apiserver" Oct 27 23:34:51.950588 kubelet[2589]: I1027 23:34:51.948343 2589 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Oct 27 23:34:51.950588 kubelet[2589]: I1027 23:34:51.948351 2589 kubelet.go:2382] "Starting kubelet main sync loop" Oct 27 23:34:51.950588 kubelet[2589]: E1027 23:34:51.948390 2589 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 27 23:34:51.950588 kubelet[2589]: I1027 23:34:51.949066 2589 factory.go:221] Registration of the containerd container factory successfully Oct 27 23:34:51.998008 kubelet[2589]: I1027 23:34:51.997958 2589 cpu_manager.go:221] "Starting CPU manager" policy="none" Oct 27 23:34:51.998008 kubelet[2589]: I1027 23:34:51.997986 2589 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Oct 27 23:34:51.998008 kubelet[2589]: I1027 23:34:51.998008 2589 state_mem.go:36] "Initialized new in-memory state store" Oct 27 23:34:51.998189 kubelet[2589]: I1027 23:34:51.998170 2589 state_mem.go:88] "Updated default CPUSet" cpuSet="" Oct 27 23:34:51.998225 kubelet[2589]: I1027 23:34:51.998185 2589 state_mem.go:96] "Updated CPUSet assignments" assignments={} Oct 27 23:34:51.998225 kubelet[2589]: I1027 23:34:51.998205 2589 policy_none.go:49] "None policy: Start" Oct 27 23:34:51.998225 kubelet[2589]: I1027 23:34:51.998215 2589 memory_manager.go:186] "Starting memorymanager" policy="None" Oct 27 23:34:51.998225 kubelet[2589]: I1027 23:34:51.998223 2589 state_mem.go:35] "Initializing new in-memory state store" Oct 27 23:34:51.998334 kubelet[2589]: I1027 23:34:51.998313 2589 state_mem.go:75] "Updated machine memory state" Oct 27 23:34:52.002078 kubelet[2589]: I1027 23:34:52.001639 2589 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 27 23:34:52.002078 kubelet[2589]: I1027 23:34:52.001798 2589 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 27 23:34:52.002078 kubelet[2589]: I1027 23:34:52.001809 2589 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 27 23:34:52.002078 kubelet[2589]: I1027 23:34:52.001992 2589 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 27 23:34:52.002782 kubelet[2589]: E1027 23:34:52.002761 2589 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Oct 27 23:34:52.049415 kubelet[2589]: I1027 23:34:52.049381 2589 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:52.049821 kubelet[2589]: I1027 23:34:52.049797 2589 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:52.049871 kubelet[2589]: I1027 23:34:52.049751 2589 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:52.055701 kubelet[2589]: E1027 23:34:52.055675 2589 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:52.106232 kubelet[2589]: I1027 23:34:52.106180 2589 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Oct 27 23:34:52.112062 kubelet[2589]: I1027 23:34:52.112036 2589 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Oct 27 23:34:52.112148 kubelet[2589]: I1027 23:34:52.112104 2589 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Oct 27 23:34:52.245324 kubelet[2589]: I1027 23:34:52.245062 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:52.245324 kubelet[2589]: I1027 23:34:52.245118 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:52.245324 kubelet[2589]: I1027 23:34:52.245161 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:52.245324 kubelet[2589]: I1027 23:34:52.245186 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/efcae0686705c689802783a3e4231d7c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"efcae0686705c689802783a3e4231d7c\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:52.245324 kubelet[2589]: I1027 23:34:52.245202 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/efcae0686705c689802783a3e4231d7c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"efcae0686705c689802783a3e4231d7c\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:52.245644 kubelet[2589]: I1027 23:34:52.245215 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " 
pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:52.245644 kubelet[2589]: I1027 23:34:52.245230 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:52.245644 kubelet[2589]: I1027 23:34:52.245244 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Oct 27 23:34:52.245644 kubelet[2589]: I1027 23:34:52.245257 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/efcae0686705c689802783a3e4231d7c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"efcae0686705c689802783a3e4231d7c\") " pod="kube-system/kube-apiserver-localhost" Oct 27 23:34:52.282379 sudo[2625]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Oct 27 23:34:52.282668 sudo[2625]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Oct 27 23:34:52.355283 kubelet[2589]: E1027 23:34:52.355217 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:52.355283 kubelet[2589]: E1027 23:34:52.355229 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:52.356385 kubelet[2589]: E1027 23:34:52.356356 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:52.722321 sudo[2625]: pam_unix(sudo:session): session closed for user root Oct 27 23:34:52.936518 kubelet[2589]: I1027 23:34:52.936302 2589 apiserver.go:52] "Watching apiserver" Oct 27 23:34:52.944142 kubelet[2589]: I1027 23:34:52.944108 2589 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Oct 27 23:34:52.985517 kubelet[2589]: I1027 23:34:52.983677 2589 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:52.985517 kubelet[2589]: E1027 23:34:52.983910 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:52.985927 kubelet[2589]: E1027 23:34:52.985861 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:53.061971 kubelet[2589]: E1027 23:34:53.061281 2589 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Oct 27 23:34:53.061971 kubelet[2589]: I1027 23:34:53.061231 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.061214659 podStartE2EDuration="1.061214659s" podCreationTimestamp="2025-10-27 23:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:34:53.061165135 +0000 UTC m=+1.186449496" watchObservedRunningTime="2025-10-27 23:34:53.061214659 +0000 UTC m=+1.186499020" Oct 27 23:34:53.061971 kubelet[2589]: E1027 23:34:53.061419 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:53.100186 kubelet[2589]: I1027 23:34:53.100110 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.100094694 podStartE2EDuration="1.100094694s" podCreationTimestamp="2025-10-27 23:34:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:34:53.099437731 +0000 UTC m=+1.224722092" watchObservedRunningTime="2025-10-27 23:34:53.100094694 +0000 UTC m=+1.225379055" Oct 27 23:34:53.100621 kubelet[2589]: I1027 23:34:53.100232 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.100228317 podStartE2EDuration="3.100228317s" podCreationTimestamp="2025-10-27 23:34:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:34:53.090365992 +0000 UTC m=+1.215650352" watchObservedRunningTime="2025-10-27 23:34:53.100228317 +0000 UTC m=+1.225512678" Oct 27 23:34:53.986620 kubelet[2589]: E1027 23:34:53.986585 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:53.987078 kubelet[2589]: E1027 23:34:53.986611 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:54.610061 sudo[1669]: pam_unix(sudo:session): session closed for user root Oct 27 23:34:54.611977 sshd[1668]: Connection closed by 10.0.0.1 port 51766 Oct 27 23:34:54.611602 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Oct 27 23:34:54.614562 systemd[1]: sshd@6-10.0.0.77:22-10.0.0.1:51766.service: Deactivated successfully. Oct 27 23:34:54.616324 systemd[1]: session-7.scope: Deactivated successfully. Oct 27 23:34:54.616494 systemd[1]: session-7.scope: Consumed 7.496s CPU time, 261.1M memory peak. Oct 27 23:34:54.617393 systemd-logind[1469]: Session 7 logged out. Waiting for processes to exit. Oct 27 23:34:54.618186 systemd-logind[1469]: Removed session 7. 
Oct 27 23:34:55.523479 kubelet[2589]: E1027 23:34:55.523408 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:57.108451 kubelet[2589]: E1027 23:34:57.108421 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:57.701219 kubelet[2589]: E1027 23:34:57.701177 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:57.991603 kubelet[2589]: E1027 23:34:57.991486 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:57.991603 kubelet[2589]: E1027 23:34:57.991540 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:58.405639 kubelet[2589]: I1027 23:34:58.405584 2589 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Oct 27 23:34:58.406042 containerd[1490]: time="2025-10-27T23:34:58.405978455Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 27 23:34:58.406896 kubelet[2589]: I1027 23:34:58.406377 2589 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Oct 27 23:34:58.993009 kubelet[2589]: E1027 23:34:58.992870 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:59.129929 systemd[1]: Created slice kubepods-besteffort-pod819052ca_8e70_44aa_b2b8_3651ecb16eb2.slice - libcontainer container kubepods-besteffort-pod819052ca_8e70_44aa_b2b8_3651ecb16eb2.slice. Oct 27 23:34:59.151981 systemd[1]: Created slice kubepods-burstable-pod800d64a0_164e_4232_a376_90c2c1bab9dc.slice - libcontainer container kubepods-burstable-pod800d64a0_164e_4232_a376_90c2c1bab9dc.slice. 
Oct 27 23:34:59.194939 kubelet[2589]: I1027 23:34:59.194902 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/819052ca-8e70-44aa-b2b8-3651ecb16eb2-kube-proxy\") pod \"kube-proxy-fbfch\" (UID: \"819052ca-8e70-44aa-b2b8-3651ecb16eb2\") " pod="kube-system/kube-proxy-fbfch" Oct 27 23:34:59.195166 kubelet[2589]: I1027 23:34:59.195149 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-run\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.195291 kubelet[2589]: I1027 23:34:59.195264 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-xtables-lock\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.195389 kubelet[2589]: I1027 23:34:59.195377 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-config-path\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.195483 kubelet[2589]: I1027 23:34:59.195471 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-hubble-tls\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.195616 kubelet[2589]: I1027 23:34:59.195603 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmtnx\" (UniqueName: \"kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-kube-api-access-dmtnx\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.195737 kubelet[2589]: I1027 23:34:59.195723 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/819052ca-8e70-44aa-b2b8-3651ecb16eb2-xtables-lock\") pod \"kube-proxy-fbfch\" (UID: \"819052ca-8e70-44aa-b2b8-3651ecb16eb2\") " pod="kube-system/kube-proxy-fbfch" Oct 27 23:34:59.195846 kubelet[2589]: I1027 23:34:59.195819 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-lib-modules\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.195942 kubelet[2589]: I1027 23:34:59.195903 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-kernel\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196081 kubelet[2589]: I1027 23:34:59.196016 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-pmm2m\" (UniqueName: \"kubernetes.io/projected/819052ca-8e70-44aa-b2b8-3651ecb16eb2-kube-api-access-pmm2m\") pod \"kube-proxy-fbfch\" (UID: \"819052ca-8e70-44aa-b2b8-3651ecb16eb2\") " pod="kube-system/kube-proxy-fbfch" Oct 27 23:34:59.196081 kubelet[2589]: I1027 23:34:59.196038 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-bpf-maps\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196081 kubelet[2589]: I1027 23:34:59.196054 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-cgroup\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196365 kubelet[2589]: I1027 23:34:59.196170 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/800d64a0-164e-4232-a376-90c2c1bab9dc-clustermesh-secrets\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196365 kubelet[2589]: I1027 23:34:59.196192 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-net\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196365 kubelet[2589]: I1027 23:34:59.196227 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cni-path\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196365 kubelet[2589]: I1027 23:34:59.196254 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-etc-cni-netd\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196365 kubelet[2589]: I1027 23:34:59.196270 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-hostproc\") pod \"cilium-8z22g\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " pod="kube-system/cilium-8z22g" Oct 27 23:34:59.196365 kubelet[2589]: I1027 23:34:59.196294 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/819052ca-8e70-44aa-b2b8-3651ecb16eb2-lib-modules\") pod \"kube-proxy-fbfch\" (UID: \"819052ca-8e70-44aa-b2b8-3651ecb16eb2\") " pod="kube-system/kube-proxy-fbfch" Oct 27 23:34:59.446713 kubelet[2589]: E1027 23:34:59.446678 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:59.447711 containerd[1490]: time="2025-10-27T23:34:59.447317159Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbfch,Uid:819052ca-8e70-44aa-b2b8-3651ecb16eb2,Namespace:kube-system,Attempt:0,}" Oct 27 23:34:59.459085 kubelet[2589]: E1027 23:34:59.458633 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:59.460282 containerd[1490]: time="2025-10-27T23:34:59.460245361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8z22g,Uid:800d64a0-164e-4232-a376-90c2c1bab9dc,Namespace:kube-system,Attempt:0,}" Oct 27 23:34:59.473293 containerd[1490]: time="2025-10-27T23:34:59.472992643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:34:59.473293 containerd[1490]: time="2025-10-27T23:34:59.473058138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:34:59.473293 containerd[1490]: time="2025-10-27T23:34:59.473072741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:59.473293 containerd[1490]: time="2025-10-27T23:34:59.473158961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:59.490350 containerd[1490]: time="2025-10-27T23:34:59.490078746Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:34:59.490350 containerd[1490]: time="2025-10-27T23:34:59.490129877Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:34:59.490350 containerd[1490]: time="2025-10-27T23:34:59.490151042Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:59.490350 containerd[1490]: time="2025-10-27T23:34:59.490215536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:59.493764 systemd[1]: Started cri-containerd-e160ee2cadba7729df4c4779b2e4c96af70e3e7e16ef864072a207ad57b99ecd.scope - libcontainer container e160ee2cadba7729df4c4779b2e4c96af70e3e7e16ef864072a207ad57b99ecd. Oct 27 23:34:59.515409 systemd[1]: Started cri-containerd-6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf.scope - libcontainer container 6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf. Oct 27 23:34:59.524848 systemd[1]: Created slice kubepods-besteffort-podb83c2aca_9e90_405a_bc8d_1957cbc07032.slice - libcontainer container kubepods-besteffort-podb83c2aca_9e90_405a_bc8d_1957cbc07032.slice. 
Oct 27 23:34:59.553255 containerd[1490]: time="2025-10-27T23:34:59.552907189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fbfch,Uid:819052ca-8e70-44aa-b2b8-3651ecb16eb2,Namespace:kube-system,Attempt:0,} returns sandbox id \"e160ee2cadba7729df4c4779b2e4c96af70e3e7e16ef864072a207ad57b99ecd\"" Oct 27 23:34:59.555224 kubelet[2589]: E1027 23:34:59.555198 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:59.563587 containerd[1490]: time="2025-10-27T23:34:59.563071567Z" level=info msg="CreateContainer within sandbox \"e160ee2cadba7729df4c4779b2e4c96af70e3e7e16ef864072a207ad57b99ecd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 27 23:34:59.568177 containerd[1490]: time="2025-10-27T23:34:59.568140552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8z22g,Uid:800d64a0-164e-4232-a376-90c2c1bab9dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\"" Oct 27 23:34:59.568984 kubelet[2589]: E1027 23:34:59.568958 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:59.570704 containerd[1490]: time="2025-10-27T23:34:59.570657601Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 27 23:34:59.580371 containerd[1490]: time="2025-10-27T23:34:59.580273415Z" level=info msg="CreateContainer within sandbox \"e160ee2cadba7729df4c4779b2e4c96af70e3e7e16ef864072a207ad57b99ecd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8a6c415c474340033e731e2134c3da3cf727085ea416753ed52257e391d2fcd4\"" Oct 27 23:34:59.581145 containerd[1490]: time="2025-10-27T23:34:59.580979415Z" level=info msg="StartContainer for \"8a6c415c474340033e731e2134c3da3cf727085ea416753ed52257e391d2fcd4\"" Oct 27 23:34:59.598663 kubelet[2589]: I1027 23:34:59.598623 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw9ml\" (UniqueName: \"kubernetes.io/projected/b83c2aca-9e90-405a-bc8d-1957cbc07032-kube-api-access-tw9ml\") pod \"cilium-operator-6c4d7847fc-pv9r8\" (UID: \"b83c2aca-9e90-405a-bc8d-1957cbc07032\") " pod="kube-system/cilium-operator-6c4d7847fc-pv9r8" Oct 27 23:34:59.598663 kubelet[2589]: I1027 23:34:59.598666 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b83c2aca-9e90-405a-bc8d-1957cbc07032-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-pv9r8\" (UID: \"b83c2aca-9e90-405a-bc8d-1957cbc07032\") " pod="kube-system/cilium-operator-6c4d7847fc-pv9r8" Oct 27 23:34:59.604733 systemd[1]: Started cri-containerd-8a6c415c474340033e731e2134c3da3cf727085ea416753ed52257e391d2fcd4.scope - libcontainer container 8a6c415c474340033e731e2134c3da3cf727085ea416753ed52257e391d2fcd4. 
Oct 27 23:34:59.630259 containerd[1490]: time="2025-10-27T23:34:59.630219106Z" level=info msg="StartContainer for \"8a6c415c474340033e731e2134c3da3cf727085ea416753ed52257e391d2fcd4\" returns successfully" Oct 27 23:34:59.830551 kubelet[2589]: E1027 23:34:59.830512 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:34:59.831379 containerd[1490]: time="2025-10-27T23:34:59.831204382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pv9r8,Uid:b83c2aca-9e90-405a-bc8d-1957cbc07032,Namespace:kube-system,Attempt:0,}" Oct 27 23:34:59.858433 containerd[1490]: time="2025-10-27T23:34:59.857922862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:34:59.858433 containerd[1490]: time="2025-10-27T23:34:59.858299507Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:34:59.858433 containerd[1490]: time="2025-10-27T23:34:59.858311469Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:59.858433 containerd[1490]: time="2025-10-27T23:34:59.858395128Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:34:59.875107 systemd[1]: Started cri-containerd-94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e.scope - libcontainer container 94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e. Oct 27 23:34:59.914539 containerd[1490]: time="2025-10-27T23:34:59.914498891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-pv9r8,Uid:b83c2aca-9e90-405a-bc8d-1957cbc07032,Namespace:kube-system,Attempt:0,} returns sandbox id \"94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e\"" Oct 27 23:34:59.915311 kubelet[2589]: E1027 23:34:59.915287 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:00.012145 kubelet[2589]: E1027 23:35:00.012106 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:05.535581 kubelet[2589]: E1027 23:35:05.535528 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:05.552192 kubelet[2589]: I1027 23:35:05.551190 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fbfch" podStartSLOduration=6.551171997 podStartE2EDuration="6.551171997s" podCreationTimestamp="2025-10-27 23:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:35:00.020976836 +0000 UTC m=+8.146261197" watchObservedRunningTime="2025-10-27 23:35:05.551171997 +0000 UTC m=+13.676456358" Oct 27 23:35:06.020923 kubelet[2589]: E1027 23:35:06.020896 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:07.064065 update_engine[1471]: I20251027 23:35:07.063894 1471 update_attempter.cc:509] Updating boot flags... Oct 27 23:35:07.099607 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2967) Oct 27 23:35:07.156746 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2970) Oct 27 23:35:10.627873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190162070.mount: Deactivated successfully. Oct 27 23:35:11.951929 containerd[1490]: time="2025-10-27T23:35:11.951881460Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:35:11.953274 containerd[1490]: time="2025-10-27T23:35:11.953213783Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Oct 27 23:35:11.954712 containerd[1490]: time="2025-10-27T23:35:11.953943032Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:35:11.955814 containerd[1490]: time="2025-10-27T23:35:11.955318400Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.384609989s" Oct 27 23:35:11.955814 containerd[1490]: time="2025-10-27T23:35:11.955353324Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 27 23:35:11.959543 containerd[1490]: time="2025-10-27T23:35:11.959492710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 27 23:35:11.964656 containerd[1490]: time="2025-10-27T23:35:11.964433033Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 27 23:35:12.001048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1746576683.mount: Deactivated successfully. Oct 27 23:35:12.003601 containerd[1490]: time="2025-10-27T23:35:12.001345778Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\"" Oct 27 23:35:12.003601 containerd[1490]: time="2025-10-27T23:35:12.002615326Z" level=info msg="StartContainer for \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\"" Oct 27 23:35:12.042782 systemd[1]: Started cri-containerd-cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4.scope - libcontainer container cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4. 
Oct 27 23:35:12.067741 containerd[1490]: time="2025-10-27T23:35:12.067699155Z" level=info msg="StartContainer for \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\" returns successfully" Oct 27 23:35:12.085734 systemd[1]: cri-containerd-cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4.scope: Deactivated successfully. Oct 27 23:35:12.195860 containerd[1490]: time="2025-10-27T23:35:12.195796331Z" level=info msg="shim disconnected" id=cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4 namespace=k8s.io Oct 27 23:35:12.196198 containerd[1490]: time="2025-10-27T23:35:12.195988193Z" level=warning msg="cleaning up after shim disconnected" id=cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4 namespace=k8s.io Oct 27 23:35:12.196198 containerd[1490]: time="2025-10-27T23:35:12.196001195Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:35:12.998079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4-rootfs.mount: Deactivated successfully. Oct 27 23:35:13.041325 kubelet[2589]: E1027 23:35:13.041294 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:13.065716 containerd[1490]: time="2025-10-27T23:35:13.065517243Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 27 23:35:13.085480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2660405507.mount: Deactivated successfully. Oct 27 23:35:13.091261 containerd[1490]: time="2025-10-27T23:35:13.091219266Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\"" Oct 27 23:35:13.092083 containerd[1490]: time="2025-10-27T23:35:13.091852176Z" level=info msg="StartContainer for \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\"" Oct 27 23:35:13.129804 systemd[1]: Started cri-containerd-db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af.scope - libcontainer container db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af. Oct 27 23:35:13.161985 containerd[1490]: time="2025-10-27T23:35:13.161934503Z" level=info msg="StartContainer for \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\" returns successfully" Oct 27 23:35:13.169464 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 27 23:35:13.169746 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:35:13.170152 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:35:13.177942 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Oct 27 23:35:13.178136 systemd[1]: cri-containerd-db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af.scope: Deactivated successfully. 
Oct 27 23:35:13.207749 containerd[1490]: time="2025-10-27T23:35:13.207698361Z" level=info msg="shim disconnected" id=db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af namespace=k8s.io Oct 27 23:35:13.208489 containerd[1490]: time="2025-10-27T23:35:13.208298307Z" level=warning msg="cleaning up after shim disconnected" id=db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af namespace=k8s.io Oct 27 23:35:13.208489 containerd[1490]: time="2025-10-27T23:35:13.208316669Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:35:13.207792 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Oct 27 23:35:13.504996 containerd[1490]: time="2025-10-27T23:35:13.504941511Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:35:13.506060 containerd[1490]: time="2025-10-27T23:35:13.506008830Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Oct 27 23:35:13.507007 containerd[1490]: time="2025-10-27T23:35:13.506972378Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Oct 27 23:35:13.508338 containerd[1490]: time="2025-10-27T23:35:13.508300286Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.548770052s" Oct 27 23:35:13.508380 containerd[1490]: time="2025-10-27T23:35:13.508336530Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 27 23:35:13.510395 containerd[1490]: time="2025-10-27T23:35:13.510365996Z" level=info msg="CreateContainer within sandbox \"94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 27 23:35:13.533535 containerd[1490]: time="2025-10-27T23:35:13.533466129Z" level=info msg="CreateContainer within sandbox \"94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\"" Oct 27 23:35:13.534437 containerd[1490]: time="2025-10-27T23:35:13.534404473Z" level=info msg="StartContainer for \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\"" Oct 27 23:35:13.558759 systemd[1]: Started cri-containerd-e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f.scope - libcontainer container e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f. 
Oct 27 23:35:13.582334 containerd[1490]: time="2025-10-27T23:35:13.582267605Z" level=info msg="StartContainer for \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\" returns successfully" Oct 27 23:35:14.001985 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4062770203.mount: Deactivated successfully. Oct 27 23:35:14.044409 kubelet[2589]: E1027 23:35:14.044348 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:14.047381 kubelet[2589]: E1027 23:35:14.047354 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:14.052208 containerd[1490]: time="2025-10-27T23:35:14.052157696Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Oct 27 23:35:14.075281 containerd[1490]: time="2025-10-27T23:35:14.075228273Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\"" Oct 27 23:35:14.076642 containerd[1490]: time="2025-10-27T23:35:14.076310388Z" level=info msg="StartContainer for \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\"" Oct 27 23:35:14.085093 kubelet[2589]: I1027 23:35:14.085021 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-pv9r8" podStartSLOduration=1.492019398 podStartE2EDuration="15.085001194s" podCreationTimestamp="2025-10-27 23:34:59 +0000 UTC" firstStartedPulling="2025-10-27 23:34:59.916135261 +0000 UTC m=+8.041419622" lastFinishedPulling="2025-10-27 23:35:13.509117057 +0000 UTC m=+21.634401418" observedRunningTime="2025-10-27 23:35:14.084828656 +0000 UTC m=+22.210113017" watchObservedRunningTime="2025-10-27 23:35:14.085001194 +0000 UTC m=+22.210285555" Oct 27 23:35:14.119800 systemd[1]: Started cri-containerd-653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3.scope - libcontainer container 653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3. Oct 27 23:35:14.192081 systemd[1]: cri-containerd-653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3.scope: Deactivated successfully. Oct 27 23:35:14.203980 containerd[1490]: time="2025-10-27T23:35:14.203924741Z" level=info msg="StartContainer for \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\" returns successfully" Oct 27 23:35:14.252970 containerd[1490]: time="2025-10-27T23:35:14.252663772Z" level=info msg="shim disconnected" id=653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3 namespace=k8s.io Oct 27 23:35:14.252970 containerd[1490]: time="2025-10-27T23:35:14.252730099Z" level=warning msg="cleaning up after shim disconnected" id=653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3 namespace=k8s.io Oct 27 23:35:14.252970 containerd[1490]: time="2025-10-27T23:35:14.252748101Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:35:15.001298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3-rootfs.mount: Deactivated successfully. 
Oct 27 23:35:15.062344 kubelet[2589]: E1027 23:35:15.062265 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:15.062344 kubelet[2589]: E1027 23:35:15.062324 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:15.069626 containerd[1490]: time="2025-10-27T23:35:15.066327256Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Oct 27 23:35:15.101574 containerd[1490]: time="2025-10-27T23:35:15.101509122Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\"" Oct 27 23:35:15.102905 containerd[1490]: time="2025-10-27T23:35:15.102276041Z" level=info msg="StartContainer for \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\"" Oct 27 23:35:15.134770 systemd[1]: Started cri-containerd-7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d.scope - libcontainer container 7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d. Oct 27 23:35:15.156809 systemd[1]: cri-containerd-7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d.scope: Deactivated successfully. Oct 27 23:35:15.159731 containerd[1490]: time="2025-10-27T23:35:15.159535717Z" level=info msg="StartContainer for \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\" returns successfully" Oct 27 23:35:15.176315 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d-rootfs.mount: Deactivated successfully. Oct 27 23:35:15.182688 containerd[1490]: time="2025-10-27T23:35:15.182632592Z" level=info msg="shim disconnected" id=7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d namespace=k8s.io Oct 27 23:35:15.182688 containerd[1490]: time="2025-10-27T23:35:15.182686237Z" level=warning msg="cleaning up after shim disconnected" id=7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d namespace=k8s.io Oct 27 23:35:15.182688 containerd[1490]: time="2025-10-27T23:35:15.182694678Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:35:16.067618 kubelet[2589]: E1027 23:35:16.067580 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:16.070721 containerd[1490]: time="2025-10-27T23:35:16.070591054Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Oct 27 23:35:16.274701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4164074393.mount: Deactivated successfully. 
Oct 27 23:35:16.408662 containerd[1490]: time="2025-10-27T23:35:16.408519131Z" level=info msg="CreateContainer within sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\"" Oct 27 23:35:16.409461 containerd[1490]: time="2025-10-27T23:35:16.409429340Z" level=info msg="StartContainer for \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\"" Oct 27 23:35:16.429144 systemd[1]: run-containerd-runc-k8s.io-6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292-runc.yTwe9O.mount: Deactivated successfully. Oct 27 23:35:16.440759 systemd[1]: Started cri-containerd-6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292.scope - libcontainer container 6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292. Oct 27 23:35:16.567445 containerd[1490]: time="2025-10-27T23:35:16.567395885Z" level=info msg="StartContainer for \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\" returns successfully" Oct 27 23:35:16.667805 kubelet[2589]: I1027 23:35:16.667693 2589 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Oct 27 23:35:16.744892 systemd[1]: Created slice kubepods-burstable-pod3ed79627_3b67_4ab8_a642_bc25a675c621.slice - libcontainer container kubepods-burstable-pod3ed79627_3b67_4ab8_a642_bc25a675c621.slice. Oct 27 23:35:16.755126 systemd[1]: Created slice kubepods-burstable-pode2bfa4dc_8599_4998_a2a7_83511f822c22.slice - libcontainer container kubepods-burstable-pode2bfa4dc_8599_4998_a2a7_83511f822c22.slice. Oct 27 23:35:16.856911 kubelet[2589]: I1027 23:35:16.856828 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j2tb2\" (UniqueName: \"kubernetes.io/projected/3ed79627-3b67-4ab8-a642-bc25a675c621-kube-api-access-j2tb2\") pod \"coredns-668d6bf9bc-r622v\" (UID: \"3ed79627-3b67-4ab8-a642-bc25a675c621\") " pod="kube-system/coredns-668d6bf9bc-r622v" Oct 27 23:35:16.857679 kubelet[2589]: I1027 23:35:16.856983 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fvv2\" (UniqueName: \"kubernetes.io/projected/e2bfa4dc-8599-4998-a2a7-83511f822c22-kube-api-access-7fvv2\") pod \"coredns-668d6bf9bc-z724x\" (UID: \"e2bfa4dc-8599-4998-a2a7-83511f822c22\") " pod="kube-system/coredns-668d6bf9bc-z724x" Oct 27 23:35:16.857679 kubelet[2589]: I1027 23:35:16.857034 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3ed79627-3b67-4ab8-a642-bc25a675c621-config-volume\") pod \"coredns-668d6bf9bc-r622v\" (UID: \"3ed79627-3b67-4ab8-a642-bc25a675c621\") " pod="kube-system/coredns-668d6bf9bc-r622v" Oct 27 23:35:16.857679 kubelet[2589]: I1027 23:35:16.857085 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2bfa4dc-8599-4998-a2a7-83511f822c22-config-volume\") pod \"coredns-668d6bf9bc-z724x\" (UID: \"e2bfa4dc-8599-4998-a2a7-83511f822c22\") " pod="kube-system/coredns-668d6bf9bc-z724x" Oct 27 23:35:17.050247 kubelet[2589]: E1027 23:35:17.050202 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:17.051015 
containerd[1490]: time="2025-10-27T23:35:17.050972305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r622v,Uid:3ed79627-3b67-4ab8-a642-bc25a675c621,Namespace:kube-system,Attempt:0,}" Oct 27 23:35:17.059313 kubelet[2589]: E1027 23:35:17.059278 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:17.059812 containerd[1490]: time="2025-10-27T23:35:17.059765888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z724x,Uid:e2bfa4dc-8599-4998-a2a7-83511f822c22,Namespace:kube-system,Attempt:0,}" Oct 27 23:35:17.072279 kubelet[2589]: E1027 23:35:17.072226 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:17.090534 kubelet[2589]: I1027 23:35:17.089687 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8z22g" podStartSLOduration=5.70028059 podStartE2EDuration="18.089660127s" podCreationTimestamp="2025-10-27 23:34:59 +0000 UTC" firstStartedPulling="2025-10-27 23:34:59.569866743 +0000 UTC m=+7.695151104" lastFinishedPulling="2025-10-27 23:35:11.95924628 +0000 UTC m=+20.084530641" observedRunningTime="2025-10-27 23:35:17.08926301 +0000 UTC m=+25.214547411" watchObservedRunningTime="2025-10-27 23:35:17.089660127 +0000 UTC m=+25.214944448" Oct 27 23:35:18.075598 kubelet[2589]: E1027 23:35:18.075514 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:18.325349 systemd[1]: Started sshd@7-10.0.0.77:22-10.0.0.1:40900.service - OpenSSH per-connection server daemon (10.0.0.1:40900). Oct 27 23:35:18.376261 sshd[3449]: Accepted publickey for core from 10.0.0.1 port 40900 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:18.377795 sshd-session[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:18.381595 systemd-logind[1469]: New session 8 of user core. Oct 27 23:35:18.391769 systemd[1]: Started session-8.scope - Session 8 of User core. Oct 27 23:35:18.501513 systemd-networkd[1405]: cilium_host: Link UP Oct 27 23:35:18.501654 systemd-networkd[1405]: cilium_net: Link UP Oct 27 23:35:18.501782 systemd-networkd[1405]: cilium_net: Gained carrier Oct 27 23:35:18.501904 systemd-networkd[1405]: cilium_host: Gained carrier Oct 27 23:35:18.544250 sshd[3451]: Connection closed by 10.0.0.1 port 40900 Oct 27 23:35:18.544717 sshd-session[3449]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:18.549450 systemd-logind[1469]: Session 8 logged out. Waiting for processes to exit. Oct 27 23:35:18.549940 systemd[1]: sshd@7-10.0.0.77:22-10.0.0.1:40900.service: Deactivated successfully. Oct 27 23:35:18.553454 systemd[1]: session-8.scope: Deactivated successfully. Oct 27 23:35:18.554770 systemd-logind[1469]: Removed session 8. 
Oct 27 23:35:18.592687 systemd-networkd[1405]: cilium_net: Gained IPv6LL Oct 27 23:35:18.602330 systemd-networkd[1405]: cilium_vxlan: Link UP Oct 27 23:35:18.602336 systemd-networkd[1405]: cilium_vxlan: Gained carrier Oct 27 23:35:18.868611 kernel: NET: Registered PF_ALG protocol family Oct 27 23:35:19.077206 kubelet[2589]: E1027 23:35:19.077162 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:19.175861 systemd-networkd[1405]: cilium_host: Gained IPv6LL Oct 27 23:35:19.474416 systemd-networkd[1405]: lxc_health: Link UP Oct 27 23:35:19.474658 systemd-networkd[1405]: lxc_health: Gained carrier Oct 27 23:35:19.648622 kernel: eth0: renamed from tmpbeb82 Oct 27 23:35:19.658849 systemd-networkd[1405]: lxc57d8fd547505: Link UP Oct 27 23:35:19.659380 systemd-networkd[1405]: lxc01ced5b355ee: Link UP Oct 27 23:35:19.661612 kernel: eth0: renamed from tmp602c1 Oct 27 23:35:19.669684 systemd-networkd[1405]: lxc01ced5b355ee: Gained carrier Oct 27 23:35:19.669910 systemd-networkd[1405]: lxc57d8fd547505: Gained carrier Oct 27 23:35:20.080952 kubelet[2589]: E1027 23:35:20.080893 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:20.200326 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL Oct 27 23:35:20.839724 systemd-networkd[1405]: lxc01ced5b355ee: Gained IPv6LL Oct 27 23:35:21.031786 systemd-networkd[1405]: lxc57d8fd547505: Gained IPv6LL Oct 27 23:35:21.032041 systemd-networkd[1405]: lxc_health: Gained IPv6LL Oct 27 23:35:21.083581 kubelet[2589]: E1027 23:35:21.083531 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:22.085132 kubelet[2589]: E1027 23:35:22.084756 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:23.295451 containerd[1490]: time="2025-10-27T23:35:23.295367712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:35:23.295451 containerd[1490]: time="2025-10-27T23:35:23.295426797Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:35:23.295451 containerd[1490]: time="2025-10-27T23:35:23.295441678Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:35:23.295906 containerd[1490]: time="2025-10-27T23:35:23.295535485Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:35:23.302251 containerd[1490]: time="2025-10-27T23:35:23.301917159Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:35:23.302251 containerd[1490]: time="2025-10-27T23:35:23.301993404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:35:23.302251 containerd[1490]: time="2025-10-27T23:35:23.302005845Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:35:23.302251 containerd[1490]: time="2025-10-27T23:35:23.302107853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:35:23.320762 systemd[1]: Started cri-containerd-beb8225db894a3d41bccd6710c4c67b10f81f42db307570d46914674397dd4ff.scope - libcontainer container beb8225db894a3d41bccd6710c4c67b10f81f42db307570d46914674397dd4ff. Oct 27 23:35:23.323635 systemd[1]: Started cri-containerd-602c16ea6ae8487de35bf359957969a0c34a205267c3687c58f6c9ba9e29d570.scope - libcontainer container 602c16ea6ae8487de35bf359957969a0c34a205267c3687c58f6c9ba9e29d570. Oct 27 23:35:23.335840 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:35:23.337932 systemd-resolved[1323]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Oct 27 23:35:23.359220 containerd[1490]: time="2025-10-27T23:35:23.359116206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-r622v,Uid:3ed79627-3b67-4ab8-a642-bc25a675c621,Namespace:kube-system,Attempt:0,} returns sandbox id \"602c16ea6ae8487de35bf359957969a0c34a205267c3687c58f6c9ba9e29d570\"" Oct 27 23:35:23.359671 containerd[1490]: time="2025-10-27T23:35:23.359645725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z724x,Uid:e2bfa4dc-8599-4998-a2a7-83511f822c22,Namespace:kube-system,Attempt:0,} returns sandbox id \"beb8225db894a3d41bccd6710c4c67b10f81f42db307570d46914674397dd4ff\"" Oct 27 23:35:23.360182 kubelet[2589]: E1027 23:35:23.360161 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:23.360474 kubelet[2589]: E1027 23:35:23.360251 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:23.362171 containerd[1490]: time="2025-10-27T23:35:23.362124869Z" level=info msg="CreateContainer within sandbox \"beb8225db894a3d41bccd6710c4c67b10f81f42db307570d46914674397dd4ff\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 23:35:23.362955 containerd[1490]: time="2025-10-27T23:35:23.362868644Z" level=info msg="CreateContainer within sandbox \"602c16ea6ae8487de35bf359957969a0c34a205267c3687c58f6c9ba9e29d570\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Oct 27 23:35:23.380759 containerd[1490]: time="2025-10-27T23:35:23.380715929Z" level=info msg="CreateContainer within sandbox \"602c16ea6ae8487de35bf359957969a0c34a205267c3687c58f6c9ba9e29d570\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9a826c16945457fe2fa6e0e5f40607a937744fc1ad9ab23b302b69552a10c41a\"" Oct 27 23:35:23.382007 containerd[1490]: time="2025-10-27T23:35:23.381450264Z" level=info msg="StartContainer for \"9a826c16945457fe2fa6e0e5f40607a937744fc1ad9ab23b302b69552a10c41a\"" Oct 27 23:35:23.382007 containerd[1490]: time="2025-10-27T23:35:23.381554072Z" level=info msg="CreateContainer within sandbox \"beb8225db894a3d41bccd6710c4c67b10f81f42db307570d46914674397dd4ff\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a195edf19e85fb0dea5a4691220494a77a5e80acc5e441f6c3f079b05c4f4528\"" Oct 27 23:35:23.382007 containerd[1490]: time="2025-10-27T23:35:23.381993944Z" level=info msg="StartContainer for \"a195edf19e85fb0dea5a4691220494a77a5e80acc5e441f6c3f079b05c4f4528\"" Oct 27 23:35:23.411759 systemd[1]: Started cri-containerd-9a826c16945457fe2fa6e0e5f40607a937744fc1ad9ab23b302b69552a10c41a.scope - libcontainer container 9a826c16945457fe2fa6e0e5f40607a937744fc1ad9ab23b302b69552a10c41a. Oct 27 23:35:23.413297 systemd[1]: Started cri-containerd-a195edf19e85fb0dea5a4691220494a77a5e80acc5e441f6c3f079b05c4f4528.scope - libcontainer container a195edf19e85fb0dea5a4691220494a77a5e80acc5e441f6c3f079b05c4f4528. Oct 27 23:35:23.443027 containerd[1490]: time="2025-10-27T23:35:23.442882985Z" level=info msg="StartContainer for \"9a826c16945457fe2fa6e0e5f40607a937744fc1ad9ab23b302b69552a10c41a\" returns successfully" Oct 27 23:35:23.447385 containerd[1490]: time="2025-10-27T23:35:23.447344677Z" level=info msg="StartContainer for \"a195edf19e85fb0dea5a4691220494a77a5e80acc5e441f6c3f079b05c4f4528\" returns successfully" Oct 27 23:35:23.556729 systemd[1]: Started sshd@8-10.0.0.77:22-10.0.0.1:54742.service - OpenSSH per-connection server daemon (10.0.0.1:54742). Oct 27 23:35:23.600155 sshd[4015]: Accepted publickey for core from 10.0.0.1 port 54742 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:23.601467 sshd-session[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:23.605781 systemd-logind[1469]: New session 9 of user core. Oct 27 23:35:23.612727 systemd[1]: Started session-9.scope - Session 9 of User core. Oct 27 23:35:23.734847 sshd[4017]: Connection closed by 10.0.0.1 port 54742 Oct 27 23:35:23.736332 sshd-session[4015]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:23.740188 systemd[1]: sshd@8-10.0.0.77:22-10.0.0.1:54742.service: Deactivated successfully. Oct 27 23:35:23.741909 systemd[1]: session-9.scope: Deactivated successfully. Oct 27 23:35:23.743185 systemd-logind[1469]: Session 9 logged out. Waiting for processes to exit. Oct 27 23:35:23.744143 systemd-logind[1469]: Removed session 9. 
Oct 27 23:35:24.091129 kubelet[2589]: E1027 23:35:24.091077 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:24.097351 kubelet[2589]: E1027 23:35:24.097308 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:24.124768 kubelet[2589]: I1027 23:35:24.124702 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-r622v" podStartSLOduration=25.124684011 podStartE2EDuration="25.124684011s" podCreationTimestamp="2025-10-27 23:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:35:24.111725322 +0000 UTC m=+32.237009683" watchObservedRunningTime="2025-10-27 23:35:24.124684011 +0000 UTC m=+32.249968332" Oct 27 23:35:24.138760 kubelet[2589]: I1027 23:35:24.138702 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z724x" podStartSLOduration=25.138683855 podStartE2EDuration="25.138683855s" podCreationTimestamp="2025-10-27 23:34:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:35:24.138245184 +0000 UTC m=+32.263529545" watchObservedRunningTime="2025-10-27 23:35:24.138683855 +0000 UTC m=+32.263968216" Oct 27 23:35:25.098749 kubelet[2589]: E1027 23:35:25.098718 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:25.099094 kubelet[2589]: E1027 23:35:25.098762 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:26.100982 kubelet[2589]: E1027 23:35:26.100937 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:26.101525 kubelet[2589]: E1027 23:35:26.101503 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:35:28.750818 systemd[1]: Started sshd@9-10.0.0.77:22-10.0.0.1:54828.service - OpenSSH per-connection server daemon (10.0.0.1:54828). Oct 27 23:35:28.811412 sshd[4038]: Accepted publickey for core from 10.0.0.1 port 54828 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:28.813036 sshd-session[4038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:28.819730 systemd-logind[1469]: New session 10 of user core. Oct 27 23:35:28.831822 systemd[1]: Started session-10.scope - Session 10 of User core. Oct 27 23:35:28.983586 sshd[4040]: Connection closed by 10.0.0.1 port 54828 Oct 27 23:35:28.984229 sshd-session[4038]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:28.988483 systemd[1]: sshd@9-10.0.0.77:22-10.0.0.1:54828.service: Deactivated successfully. Oct 27 23:35:28.992216 systemd[1]: session-10.scope: Deactivated successfully. Oct 27 23:35:28.993068 systemd-logind[1469]: Session 10 logged out. 
Waiting for processes to exit. Oct 27 23:35:28.994524 systemd-logind[1469]: Removed session 10. Oct 27 23:35:34.002211 systemd[1]: Started sshd@10-10.0.0.77:22-10.0.0.1:36048.service - OpenSSH per-connection server daemon (10.0.0.1:36048). Oct 27 23:35:34.052110 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 36048 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:34.053480 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:34.057289 systemd-logind[1469]: New session 11 of user core. Oct 27 23:35:34.066733 systemd[1]: Started session-11.scope - Session 11 of User core. Oct 27 23:35:34.178283 sshd[4061]: Connection closed by 10.0.0.1 port 36048 Oct 27 23:35:34.178816 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:34.193802 systemd[1]: sshd@10-10.0.0.77:22-10.0.0.1:36048.service: Deactivated successfully. Oct 27 23:35:34.195364 systemd[1]: session-11.scope: Deactivated successfully. Oct 27 23:35:34.196742 systemd-logind[1469]: Session 11 logged out. Waiting for processes to exit. Oct 27 23:35:34.203841 systemd[1]: Started sshd@11-10.0.0.77:22-10.0.0.1:36052.service - OpenSSH per-connection server daemon (10.0.0.1:36052). Oct 27 23:35:34.204719 systemd-logind[1469]: Removed session 11. Oct 27 23:35:34.242497 sshd[4075]: Accepted publickey for core from 10.0.0.1 port 36052 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:34.244399 sshd-session[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:34.251746 systemd-logind[1469]: New session 12 of user core. Oct 27 23:35:34.262062 systemd[1]: Started session-12.scope - Session 12 of User core. Oct 27 23:35:34.420198 sshd[4078]: Connection closed by 10.0.0.1 port 36052 Oct 27 23:35:34.421070 sshd-session[4075]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:34.433627 systemd[1]: sshd@11-10.0.0.77:22-10.0.0.1:36052.service: Deactivated successfully. Oct 27 23:35:34.437164 systemd[1]: session-12.scope: Deactivated successfully. Oct 27 23:35:34.441002 systemd-logind[1469]: Session 12 logged out. Waiting for processes to exit. Oct 27 23:35:34.450103 systemd[1]: Started sshd@12-10.0.0.77:22-10.0.0.1:36054.service - OpenSSH per-connection server daemon (10.0.0.1:36054). Oct 27 23:35:34.451711 systemd-logind[1469]: Removed session 12. Oct 27 23:35:34.491693 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 36054 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:34.493342 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:34.497614 systemd-logind[1469]: New session 13 of user core. Oct 27 23:35:34.510769 systemd[1]: Started session-13.scope - Session 13 of User core. Oct 27 23:35:34.626817 sshd[4092]: Connection closed by 10.0.0.1 port 36054 Oct 27 23:35:34.627196 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:34.630652 systemd[1]: sshd@12-10.0.0.77:22-10.0.0.1:36054.service: Deactivated successfully. Oct 27 23:35:34.632459 systemd[1]: session-13.scope: Deactivated successfully. Oct 27 23:35:34.633260 systemd-logind[1469]: Session 13 logged out. Waiting for processes to exit. Oct 27 23:35:34.634504 systemd-logind[1469]: Removed session 13. Oct 27 23:35:39.642818 systemd[1]: Started sshd@13-10.0.0.77:22-10.0.0.1:47380.service - OpenSSH per-connection server daemon (10.0.0.1:47380). 
Oct 27 23:35:39.686612 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 47380 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:39.687773 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:39.691290 systemd-logind[1469]: New session 14 of user core. Oct 27 23:35:39.703790 systemd[1]: Started session-14.scope - Session 14 of User core. Oct 27 23:35:39.820380 sshd[4108]: Connection closed by 10.0.0.1 port 47380 Oct 27 23:35:39.821767 sshd-session[4106]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:39.824840 systemd[1]: sshd@13-10.0.0.77:22-10.0.0.1:47380.service: Deactivated successfully. Oct 27 23:35:39.826796 systemd[1]: session-14.scope: Deactivated successfully. Oct 27 23:35:39.827787 systemd-logind[1469]: Session 14 logged out. Waiting for processes to exit. Oct 27 23:35:39.828746 systemd-logind[1469]: Removed session 14. Oct 27 23:35:44.833151 systemd[1]: Started sshd@14-10.0.0.77:22-10.0.0.1:47390.service - OpenSSH per-connection server daemon (10.0.0.1:47390). Oct 27 23:35:44.874487 sshd[4121]: Accepted publickey for core from 10.0.0.1 port 47390 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:44.875887 sshd-session[4121]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:44.880501 systemd-logind[1469]: New session 15 of user core. Oct 27 23:35:44.892840 systemd[1]: Started session-15.scope - Session 15 of User core. Oct 27 23:35:45.002334 sshd[4123]: Connection closed by 10.0.0.1 port 47390 Oct 27 23:35:45.002827 sshd-session[4121]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:45.013470 systemd[1]: sshd@14-10.0.0.77:22-10.0.0.1:47390.service: Deactivated successfully. Oct 27 23:35:45.015220 systemd[1]: session-15.scope: Deactivated successfully. Oct 27 23:35:45.016574 systemd-logind[1469]: Session 15 logged out. Waiting for processes to exit. Oct 27 23:35:45.030937 systemd[1]: Started sshd@15-10.0.0.77:22-10.0.0.1:47392.service - OpenSSH per-connection server daemon (10.0.0.1:47392). Oct 27 23:35:45.032432 systemd-logind[1469]: Removed session 15. Oct 27 23:35:45.067751 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 47392 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:45.069045 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:45.073735 systemd-logind[1469]: New session 16 of user core. Oct 27 23:35:45.080774 systemd[1]: Started session-16.scope - Session 16 of User core. Oct 27 23:35:45.263362 sshd[4138]: Connection closed by 10.0.0.1 port 47392 Oct 27 23:35:45.264272 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:45.282080 systemd[1]: sshd@15-10.0.0.77:22-10.0.0.1:47392.service: Deactivated successfully. Oct 27 23:35:45.283970 systemd[1]: session-16.scope: Deactivated successfully. Oct 27 23:35:45.284831 systemd-logind[1469]: Session 16 logged out. Waiting for processes to exit. Oct 27 23:35:45.287322 systemd[1]: Started sshd@16-10.0.0.77:22-10.0.0.1:47408.service - OpenSSH per-connection server daemon (10.0.0.1:47408). Oct 27 23:35:45.288483 systemd-logind[1469]: Removed session 16. 
Oct 27 23:35:45.333990 sshd[4149]: Accepted publickey for core from 10.0.0.1 port 47408 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:45.335487 sshd-session[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:45.339962 systemd-logind[1469]: New session 17 of user core. Oct 27 23:35:45.353800 systemd[1]: Started session-17.scope - Session 17 of User core. Oct 27 23:35:45.958126 sshd[4152]: Connection closed by 10.0.0.1 port 47408 Oct 27 23:35:45.958860 sshd-session[4149]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:45.969102 systemd[1]: sshd@16-10.0.0.77:22-10.0.0.1:47408.service: Deactivated successfully. Oct 27 23:35:45.974319 systemd[1]: session-17.scope: Deactivated successfully. Oct 27 23:35:45.979369 systemd-logind[1469]: Session 17 logged out. Waiting for processes to exit. Oct 27 23:35:45.986881 systemd[1]: Started sshd@17-10.0.0.77:22-10.0.0.1:47412.service - OpenSSH per-connection server daemon (10.0.0.1:47412). Oct 27 23:35:45.989878 systemd-logind[1469]: Removed session 17. Oct 27 23:35:46.040772 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 47412 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:46.042266 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:46.047317 systemd-logind[1469]: New session 18 of user core. Oct 27 23:35:46.061776 systemd[1]: Started session-18.scope - Session 18 of User core. Oct 27 23:35:46.281965 sshd[4176]: Connection closed by 10.0.0.1 port 47412 Oct 27 23:35:46.282395 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:46.296579 systemd[1]: sshd@17-10.0.0.77:22-10.0.0.1:47412.service: Deactivated successfully. Oct 27 23:35:46.298458 systemd[1]: session-18.scope: Deactivated successfully. Oct 27 23:35:46.299378 systemd-logind[1469]: Session 18 logged out. Waiting for processes to exit. Oct 27 23:35:46.306169 systemd[1]: Started sshd@18-10.0.0.77:22-10.0.0.1:47422.service - OpenSSH per-connection server daemon (10.0.0.1:47422). Oct 27 23:35:46.307045 systemd-logind[1469]: Removed session 18. Oct 27 23:35:46.343544 sshd[4187]: Accepted publickey for core from 10.0.0.1 port 47422 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:46.345011 sshd-session[4187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:46.349037 systemd-logind[1469]: New session 19 of user core. Oct 27 23:35:46.361777 systemd[1]: Started session-19.scope - Session 19 of User core. Oct 27 23:35:46.470726 sshd[4190]: Connection closed by 10.0.0.1 port 47422 Oct 27 23:35:46.471265 sshd-session[4187]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:46.474555 systemd[1]: sshd@18-10.0.0.77:22-10.0.0.1:47422.service: Deactivated successfully. Oct 27 23:35:46.476477 systemd[1]: session-19.scope: Deactivated successfully. Oct 27 23:35:46.478131 systemd-logind[1469]: Session 19 logged out. Waiting for processes to exit. Oct 27 23:35:46.479024 systemd-logind[1469]: Removed session 19. Oct 27 23:35:51.484349 systemd[1]: Started sshd@19-10.0.0.77:22-10.0.0.1:46394.service - OpenSSH per-connection server daemon (10.0.0.1:46394). 
Oct 27 23:35:51.539530 sshd[4205]: Accepted publickey for core from 10.0.0.1 port 46394 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:51.541222 sshd-session[4205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:51.546901 systemd-logind[1469]: New session 20 of user core. Oct 27 23:35:51.556793 systemd[1]: Started session-20.scope - Session 20 of User core. Oct 27 23:35:51.683545 sshd[4207]: Connection closed by 10.0.0.1 port 46394 Oct 27 23:35:51.683391 sshd-session[4205]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:51.690094 systemd[1]: sshd@19-10.0.0.77:22-10.0.0.1:46394.service: Deactivated successfully. Oct 27 23:35:51.692261 systemd[1]: session-20.scope: Deactivated successfully. Oct 27 23:35:51.693032 systemd-logind[1469]: Session 20 logged out. Waiting for processes to exit. Oct 27 23:35:51.693923 systemd-logind[1469]: Removed session 20. Oct 27 23:35:56.695169 systemd[1]: Started sshd@20-10.0.0.77:22-10.0.0.1:46406.service - OpenSSH per-connection server daemon (10.0.0.1:46406). Oct 27 23:35:56.735200 sshd[4224]: Accepted publickey for core from 10.0.0.1 port 46406 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:35:56.736484 sshd-session[4224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:35:56.740306 systemd-logind[1469]: New session 21 of user core. Oct 27 23:35:56.747748 systemd[1]: Started session-21.scope - Session 21 of User core. Oct 27 23:35:56.856899 sshd[4226]: Connection closed by 10.0.0.1 port 46406 Oct 27 23:35:56.857445 sshd-session[4224]: pam_unix(sshd:session): session closed for user core Oct 27 23:35:56.860871 systemd[1]: sshd@20-10.0.0.77:22-10.0.0.1:46406.service: Deactivated successfully. Oct 27 23:35:56.864323 systemd[1]: session-21.scope: Deactivated successfully. Oct 27 23:35:56.864982 systemd-logind[1469]: Session 21 logged out. Waiting for processes to exit. Oct 27 23:35:56.865810 systemd-logind[1469]: Removed session 21. Oct 27 23:36:01.875479 systemd[1]: Started sshd@21-10.0.0.77:22-10.0.0.1:42678.service - OpenSSH per-connection server daemon (10.0.0.1:42678). Oct 27 23:36:01.931560 sshd[4242]: Accepted publickey for core from 10.0.0.1 port 42678 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:36:01.932917 sshd-session[4242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:36:01.936584 systemd-logind[1469]: New session 22 of user core. Oct 27 23:36:01.943747 systemd[1]: Started session-22.scope - Session 22 of User core. Oct 27 23:36:01.950311 kubelet[2589]: E1027 23:36:01.950226 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:36:02.058264 sshd[4244]: Connection closed by 10.0.0.1 port 42678 Oct 27 23:36:02.058895 sshd-session[4242]: pam_unix(sshd:session): session closed for user core Oct 27 23:36:02.062719 systemd[1]: sshd@21-10.0.0.77:22-10.0.0.1:42678.service: Deactivated successfully. Oct 27 23:36:02.065485 systemd[1]: session-22.scope: Deactivated successfully. Oct 27 23:36:02.066731 systemd-logind[1469]: Session 22 logged out. Waiting for processes to exit. Oct 27 23:36:02.067601 systemd-logind[1469]: Removed session 22. Oct 27 23:36:07.072749 systemd[1]: Started sshd@22-10.0.0.77:22-10.0.0.1:42688.service - OpenSSH per-connection server daemon (10.0.0.1:42688). 
Oct 27 23:36:07.114323 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 42688 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:36:07.115628 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:36:07.119299 systemd-logind[1469]: New session 23 of user core. Oct 27 23:36:07.126780 systemd[1]: Started session-23.scope - Session 23 of User core. Oct 27 23:36:07.232582 sshd[4259]: Connection closed by 10.0.0.1 port 42688 Oct 27 23:36:07.232843 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Oct 27 23:36:07.246724 systemd[1]: sshd@22-10.0.0.77:22-10.0.0.1:42688.service: Deactivated successfully. Oct 27 23:36:07.248339 systemd[1]: session-23.scope: Deactivated successfully. Oct 27 23:36:07.249651 systemd-logind[1469]: Session 23 logged out. Waiting for processes to exit. Oct 27 23:36:07.258905 systemd[1]: Started sshd@23-10.0.0.77:22-10.0.0.1:42698.service - OpenSSH per-connection server daemon (10.0.0.1:42698). Oct 27 23:36:07.260350 systemd-logind[1469]: Removed session 23. Oct 27 23:36:07.295089 sshd[4272]: Accepted publickey for core from 10.0.0.1 port 42698 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:36:07.296293 sshd-session[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:36:07.300625 systemd-logind[1469]: New session 24 of user core. Oct 27 23:36:07.306772 systemd[1]: Started session-24.scope - Session 24 of User core. Oct 27 23:36:09.132795 containerd[1490]: time="2025-10-27T23:36:09.132698133Z" level=info msg="StopContainer for \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\" with timeout 30 (s)" Oct 27 23:36:09.134805 containerd[1490]: time="2025-10-27T23:36:09.134707512Z" level=info msg="Stop container \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\" with signal terminated" Oct 27 23:36:09.155716 systemd[1]: cri-containerd-e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f.scope: Deactivated successfully. Oct 27 23:36:09.165694 containerd[1490]: time="2025-10-27T23:36:09.165318985Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 27 23:36:09.166404 containerd[1490]: time="2025-10-27T23:36:09.166373334Z" level=info msg="StopContainer for \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\" with timeout 2 (s)" Oct 27 23:36:09.166797 containerd[1490]: time="2025-10-27T23:36:09.166709170Z" level=info msg="Stop container \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\" with signal terminated" Oct 27 23:36:09.175205 systemd-networkd[1405]: lxc_health: Link DOWN Oct 27 23:36:09.175485 systemd-networkd[1405]: lxc_health: Lost carrier Oct 27 23:36:09.177230 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f-rootfs.mount: Deactivated successfully. 
Oct 27 23:36:09.181635 containerd[1490]: time="2025-10-27T23:36:09.181579452Z" level=info msg="shim disconnected" id=e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f namespace=k8s.io Oct 27 23:36:09.181635 containerd[1490]: time="2025-10-27T23:36:09.181632051Z" level=warning msg="cleaning up after shim disconnected" id=e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f namespace=k8s.io Oct 27 23:36:09.181635 containerd[1490]: time="2025-10-27T23:36:09.181640851Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:36:09.196353 systemd[1]: cri-containerd-6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292.scope: Deactivated successfully. Oct 27 23:36:09.196764 systemd[1]: cri-containerd-6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292.scope: Consumed 6.399s CPU time, 122.1M memory peak, 136K read from disk, 12.9M written to disk. Oct 27 23:36:09.238602 containerd[1490]: time="2025-10-27T23:36:09.238542964Z" level=info msg="StopContainer for \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\" returns successfully" Oct 27 23:36:09.240030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292-rootfs.mount: Deactivated successfully. Oct 27 23:36:09.241739 containerd[1490]: time="2025-10-27T23:36:09.241708570Z" level=info msg="StopPodSandbox for \"94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e\"" Oct 27 23:36:09.241810 containerd[1490]: time="2025-10-27T23:36:09.241764009Z" level=info msg="Container to stop \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:36:09.243796 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e-shm.mount: Deactivated successfully. Oct 27 23:36:09.248027 systemd[1]: cri-containerd-94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e.scope: Deactivated successfully. 
Oct 27 23:36:09.253471 containerd[1490]: time="2025-10-27T23:36:09.252416135Z" level=info msg="shim disconnected" id=6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292 namespace=k8s.io Oct 27 23:36:09.253471 containerd[1490]: time="2025-10-27T23:36:09.252492295Z" level=warning msg="cleaning up after shim disconnected" id=6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292 namespace=k8s.io Oct 27 23:36:09.253471 containerd[1490]: time="2025-10-27T23:36:09.252501055Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:36:09.269671 containerd[1490]: time="2025-10-27T23:36:09.269627552Z" level=info msg="StopContainer for \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\" returns successfully" Oct 27 23:36:09.270246 containerd[1490]: time="2025-10-27T23:36:09.270215505Z" level=info msg="StopPodSandbox for \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\"" Oct 27 23:36:09.270323 containerd[1490]: time="2025-10-27T23:36:09.270260105Z" level=info msg="Container to stop \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:36:09.270323 containerd[1490]: time="2025-10-27T23:36:09.270319544Z" level=info msg="Container to stop \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:36:09.270735 containerd[1490]: time="2025-10-27T23:36:09.270331224Z" level=info msg="Container to stop \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:36:09.270735 containerd[1490]: time="2025-10-27T23:36:09.270340384Z" level=info msg="Container to stop \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:36:09.270735 containerd[1490]: time="2025-10-27T23:36:09.270349264Z" level=info msg="Container to stop \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 27 23:36:09.273091 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf-shm.mount: Deactivated successfully. Oct 27 23:36:09.279172 systemd[1]: cri-containerd-6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf.scope: Deactivated successfully. 
Oct 27 23:36:09.280353 containerd[1490]: time="2025-10-27T23:36:09.279990241Z" level=info msg="shim disconnected" id=94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e namespace=k8s.io Oct 27 23:36:09.280353 containerd[1490]: time="2025-10-27T23:36:09.280050680Z" level=warning msg="cleaning up after shim disconnected" id=94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e namespace=k8s.io Oct 27 23:36:09.280353 containerd[1490]: time="2025-10-27T23:36:09.280059400Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:36:09.291624 containerd[1490]: time="2025-10-27T23:36:09.291526638Z" level=info msg="TearDown network for sandbox \"94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e\" successfully" Oct 27 23:36:09.291624 containerd[1490]: time="2025-10-27T23:36:09.291556598Z" level=info msg="StopPodSandbox for \"94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e\" returns successfully" Oct 27 23:36:09.317077 containerd[1490]: time="2025-10-27T23:36:09.317016366Z" level=info msg="shim disconnected" id=6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf namespace=k8s.io Oct 27 23:36:09.317077 containerd[1490]: time="2025-10-27T23:36:09.317070085Z" level=warning msg="cleaning up after shim disconnected" id=6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf namespace=k8s.io Oct 27 23:36:09.317077 containerd[1490]: time="2025-10-27T23:36:09.317078605Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:36:09.327897 containerd[1490]: time="2025-10-27T23:36:09.327846890Z" level=info msg="TearDown network for sandbox \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" successfully" Oct 27 23:36:09.327897 containerd[1490]: time="2025-10-27T23:36:09.327883610Z" level=info msg="StopPodSandbox for \"6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf\" returns successfully" Oct 27 23:36:09.389273 kubelet[2589]: I1027 23:36:09.389104 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cni-path\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389273 kubelet[2589]: I1027 23:36:09.389158 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-etc-cni-netd\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389273 kubelet[2589]: I1027 23:36:09.389177 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-xtables-lock\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389273 kubelet[2589]: I1027 23:36:09.389198 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-hubble-tls\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389273 kubelet[2589]: I1027 23:36:09.389214 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-lib-modules\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389273 kubelet[2589]: I1027 23:36:09.389235 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/800d64a0-164e-4232-a376-90c2c1bab9dc-clustermesh-secrets\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389820 kubelet[2589]: I1027 23:36:09.389261 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-kernel\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389820 kubelet[2589]: I1027 23:36:09.389279 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-bpf-maps\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389820 kubelet[2589]: I1027 23:36:09.389296 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmtnx\" (UniqueName: \"kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-kube-api-access-dmtnx\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389820 kubelet[2589]: I1027 23:36:09.389311 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-hostproc\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389820 kubelet[2589]: I1027 23:36:09.389328 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b83c2aca-9e90-405a-bc8d-1957cbc07032-cilium-config-path\") pod \"b83c2aca-9e90-405a-bc8d-1957cbc07032\" (UID: \"b83c2aca-9e90-405a-bc8d-1957cbc07032\") " Oct 27 23:36:09.389820 kubelet[2589]: I1027 23:36:09.389343 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tw9ml\" (UniqueName: \"kubernetes.io/projected/b83c2aca-9e90-405a-bc8d-1957cbc07032-kube-api-access-tw9ml\") pod \"b83c2aca-9e90-405a-bc8d-1957cbc07032\" (UID: \"b83c2aca-9e90-405a-bc8d-1957cbc07032\") " Oct 27 23:36:09.389963 kubelet[2589]: I1027 23:36:09.389363 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-config-path\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389963 kubelet[2589]: I1027 23:36:09.389379 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-run\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389963 kubelet[2589]: I1027 23:36:09.389393 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-cgroup\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389963 kubelet[2589]: I1027 23:36:09.389408 2589 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-net\") pod \"800d64a0-164e-4232-a376-90c2c1bab9dc\" (UID: \"800d64a0-164e-4232-a376-90c2c1bab9dc\") " Oct 27 23:36:09.389963 kubelet[2589]: I1027 23:36:09.389935 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cni-path" (OuterVolumeSpecName: "cni-path") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.390363 kubelet[2589]: I1027 23:36:09.390118 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.390497 kubelet[2589]: I1027 23:36:09.390454 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.390497 kubelet[2589]: I1027 23:36:09.390480 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.392542 kubelet[2589]: I1027 23:36:09.392344 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 23:36:09.392542 kubelet[2589]: I1027 23:36:09.392396 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.392542 kubelet[2589]: I1027 23:36:09.392435 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.393049 kubelet[2589]: I1027 23:36:09.392946 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-kube-api-access-dmtnx" (OuterVolumeSpecName: "kube-api-access-dmtnx") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "kube-api-access-dmtnx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 23:36:09.393049 kubelet[2589]: I1027 23:36:09.392992 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.393049 kubelet[2589]: I1027 23:36:09.393008 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.393049 kubelet[2589]: I1027 23:36:09.393024 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-hostproc" (OuterVolumeSpecName: "hostproc") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.393218 kubelet[2589]: I1027 23:36:09.393176 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/800d64a0-164e-4232-a376-90c2c1bab9dc-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Oct 27 23:36:09.393254 kubelet[2589]: I1027 23:36:09.393099 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Oct 27 23:36:09.393613 kubelet[2589]: I1027 23:36:09.393585 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b83c2aca-9e90-405a-bc8d-1957cbc07032-kube-api-access-tw9ml" (OuterVolumeSpecName: "kube-api-access-tw9ml") pod "b83c2aca-9e90-405a-bc8d-1957cbc07032" (UID: "b83c2aca-9e90-405a-bc8d-1957cbc07032"). InnerVolumeSpecName "kube-api-access-tw9ml". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 23:36:09.394750 kubelet[2589]: I1027 23:36:09.394728 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b83c2aca-9e90-405a-bc8d-1957cbc07032-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b83c2aca-9e90-405a-bc8d-1957cbc07032" (UID: "b83c2aca-9e90-405a-bc8d-1957cbc07032"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Oct 27 23:36:09.400068 kubelet[2589]: I1027 23:36:09.400031 2589 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "800d64a0-164e-4232-a376-90c2c1bab9dc" (UID: "800d64a0-164e-4232-a376-90c2c1bab9dc"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Oct 27 23:36:09.489937 kubelet[2589]: I1027 23:36:09.489881 2589 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/800d64a0-164e-4232-a376-90c2c1bab9dc-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.489937 kubelet[2589]: I1027 23:36:09.489922 2589 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.489937 kubelet[2589]: I1027 23:36:09.489934 2589 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-bpf-maps\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.489937 kubelet[2589]: I1027 23:36:09.489942 2589 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmtnx\" (UniqueName: \"kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-kube-api-access-dmtnx\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.489937 kubelet[2589]: I1027 23:36:09.489952 2589 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b83c2aca-9e90-405a-bc8d-1957cbc07032-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 23:36:09.489960 2589 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tw9ml\" (UniqueName: \"kubernetes.io/projected/b83c2aca-9e90-405a-bc8d-1957cbc07032-kube-api-access-tw9ml\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 23:36:09.489968 2589 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-hostproc\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 23:36:09.489975 2589 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 23:36:09.489983 2589 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 23:36:09.489990 2589 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 23:36:09.489998 2589 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cilium-run\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 
23:36:09.490005 2589 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-cni-path\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490159 kubelet[2589]: I1027 23:36:09.490012 2589 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490310 kubelet[2589]: I1027 23:36:09.490018 2589 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/800d64a0-164e-4232-a376-90c2c1bab9dc-hubble-tls\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490310 kubelet[2589]: I1027 23:36:09.490025 2589 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-lib-modules\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.490310 kubelet[2589]: I1027 23:36:09.490032 2589 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/800d64a0-164e-4232-a376-90c2c1bab9dc-xtables-lock\") on node \"localhost\" DevicePath \"\"" Oct 27 23:36:09.956961 systemd[1]: Removed slice kubepods-besteffort-podb83c2aca_9e90_405a_bc8d_1957cbc07032.slice - libcontainer container kubepods-besteffort-podb83c2aca_9e90_405a_bc8d_1957cbc07032.slice. Oct 27 23:36:09.959922 systemd[1]: Removed slice kubepods-burstable-pod800d64a0_164e_4232_a376_90c2c1bab9dc.slice - libcontainer container kubepods-burstable-pod800d64a0_164e_4232_a376_90c2c1bab9dc.slice. Oct 27 23:36:09.960217 systemd[1]: kubepods-burstable-pod800d64a0_164e_4232_a376_90c2c1bab9dc.slice: Consumed 6.483s CPU time, 122.4M memory peak, 144K read from disk, 12.9M written to disk. Oct 27 23:36:10.128135 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-94d1e307839d0bc3d9b6561288c92f053b1bb65fe2ab6c78298b4a793ca4437e-rootfs.mount: Deactivated successfully. Oct 27 23:36:10.128247 systemd[1]: var-lib-kubelet-pods-b83c2aca\x2d9e90\x2d405a\x2dbc8d\x2d1957cbc07032-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtw9ml.mount: Deactivated successfully. Oct 27 23:36:10.128309 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aaf9906c29144d1d551c48a9bd6c44000ea8256ce2c17ebe21c70497522ebaf-rootfs.mount: Deactivated successfully. Oct 27 23:36:10.128365 systemd[1]: var-lib-kubelet-pods-800d64a0\x2d164e\x2d4232\x2da376\x2d90c2c1bab9dc-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmtnx.mount: Deactivated successfully. Oct 27 23:36:10.128418 systemd[1]: var-lib-kubelet-pods-800d64a0\x2d164e\x2d4232\x2da376\x2d90c2c1bab9dc-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 27 23:36:10.128469 systemd[1]: var-lib-kubelet-pods-800d64a0\x2d164e\x2d4232\x2da376\x2d90c2c1bab9dc-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 27 23:36:10.204182 kubelet[2589]: I1027 23:36:10.204132 2589 scope.go:117] "RemoveContainer" containerID="6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292" Oct 27 23:36:10.209779 containerd[1490]: time="2025-10-27T23:36:10.207822778Z" level=info msg="RemoveContainer for \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\"" Oct 27 23:36:10.211846 containerd[1490]: time="2025-10-27T23:36:10.211805940Z" level=info msg="RemoveContainer for \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\" returns successfully" Oct 27 23:36:10.212588 kubelet[2589]: I1027 23:36:10.212541 2589 scope.go:117] "RemoveContainer" containerID="7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d" Oct 27 23:36:10.215091 containerd[1490]: time="2025-10-27T23:36:10.215040390Z" level=info msg="RemoveContainer for \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\"" Oct 27 23:36:10.224920 containerd[1490]: time="2025-10-27T23:36:10.224859696Z" level=info msg="RemoveContainer for \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\" returns successfully" Oct 27 23:36:10.226118 kubelet[2589]: I1027 23:36:10.225948 2589 scope.go:117] "RemoveContainer" containerID="653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3" Oct 27 23:36:10.227394 containerd[1490]: time="2025-10-27T23:36:10.227309553Z" level=info msg="RemoveContainer for \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\"" Oct 27 23:36:10.230201 containerd[1490]: time="2025-10-27T23:36:10.230156806Z" level=info msg="RemoveContainer for \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\" returns successfully" Oct 27 23:36:10.230582 kubelet[2589]: I1027 23:36:10.230509 2589 scope.go:117] "RemoveContainer" containerID="db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af" Oct 27 23:36:10.231499 containerd[1490]: time="2025-10-27T23:36:10.231473273Z" level=info msg="RemoveContainer for \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\"" Oct 27 23:36:10.237779 containerd[1490]: time="2025-10-27T23:36:10.234881721Z" level=info msg="RemoveContainer for \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\" returns successfully" Oct 27 23:36:10.237986 kubelet[2589]: I1027 23:36:10.236758 2589 scope.go:117] "RemoveContainer" containerID="cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4" Oct 27 23:36:10.241925 containerd[1490]: time="2025-10-27T23:36:10.241874175Z" level=info msg="RemoveContainer for \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\"" Oct 27 23:36:10.244971 containerd[1490]: time="2025-10-27T23:36:10.244922626Z" level=info msg="RemoveContainer for \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\" returns successfully" Oct 27 23:36:10.245267 kubelet[2589]: I1027 23:36:10.245242 2589 scope.go:117] "RemoveContainer" containerID="6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292" Oct 27 23:36:10.245558 containerd[1490]: time="2025-10-27T23:36:10.245527420Z" level=error msg="ContainerStatus for \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\": not found" Oct 27 23:36:10.245748 kubelet[2589]: E1027 23:36:10.245693 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when 
try to find container \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\": not found" containerID="6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292" Oct 27 23:36:10.250592 kubelet[2589]: I1027 23:36:10.250438 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292"} err="failed to get container status \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\": rpc error: code = NotFound desc = an error occurred when try to find container \"6ccf7b5b95320d8c7990f8fcdc24c9867c412dac5f77ef6708ef78737e1ae292\": not found" Oct 27 23:36:10.250592 kubelet[2589]: I1027 23:36:10.250592 2589 scope.go:117] "RemoveContainer" containerID="7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d" Oct 27 23:36:10.251068 containerd[1490]: time="2025-10-27T23:36:10.251007048Z" level=error msg="ContainerStatus for \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\": not found" Oct 27 23:36:10.251849 kubelet[2589]: E1027 23:36:10.251542 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\": not found" containerID="7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d" Oct 27 23:36:10.251849 kubelet[2589]: I1027 23:36:10.251595 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d"} err="failed to get container status \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\": rpc error: code = NotFound desc = an error occurred when try to find container \"7b1813d5c52cd4ee17c3e34da5f4330688522ed8c31895195d4babf79a0f2d3d\": not found" Oct 27 23:36:10.251849 kubelet[2589]: I1027 23:36:10.251630 2589 scope.go:117] "RemoveContainer" containerID="653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3" Oct 27 23:36:10.252062 containerd[1490]: time="2025-10-27T23:36:10.252021398Z" level=error msg="ContainerStatus for \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\": not found" Oct 27 23:36:10.252271 kubelet[2589]: E1027 23:36:10.252249 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\": not found" containerID="653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3" Oct 27 23:36:10.252301 kubelet[2589]: I1027 23:36:10.252277 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3"} err="failed to get container status \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\": rpc error: code = NotFound desc = an error occurred when try to find container \"653c1933c4050c855a07fde2cc30991b86369f238a0b59264a0b1f9e42703da3\": not found" Oct 27 23:36:10.252301 kubelet[2589]: I1027 23:36:10.252294 2589 
scope.go:117] "RemoveContainer" containerID="db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af" Oct 27 23:36:10.252528 containerd[1490]: time="2025-10-27T23:36:10.252490114Z" level=error msg="ContainerStatus for \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\": not found" Oct 27 23:36:10.252704 kubelet[2589]: E1027 23:36:10.252673 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\": not found" containerID="db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af" Oct 27 23:36:10.252743 kubelet[2589]: I1027 23:36:10.252707 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af"} err="failed to get container status \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\": rpc error: code = NotFound desc = an error occurred when try to find container \"db520633c4b576e67c7840d7992ce54d51cee0a9fc0e95431d4257b6e42a77af\": not found" Oct 27 23:36:10.252743 kubelet[2589]: I1027 23:36:10.252725 2589 scope.go:117] "RemoveContainer" containerID="cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4" Oct 27 23:36:10.252991 containerd[1490]: time="2025-10-27T23:36:10.252950549Z" level=error msg="ContainerStatus for \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\": not found" Oct 27 23:36:10.253155 kubelet[2589]: E1027 23:36:10.253128 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\": not found" containerID="cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4" Oct 27 23:36:10.253191 kubelet[2589]: I1027 23:36:10.253167 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4"} err="failed to get container status \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\": rpc error: code = NotFound desc = an error occurred when try to find container \"cee1ef7903a782032c622ef1ee9403baf6f2c7ebb102bee185455881cf1d56d4\": not found" Oct 27 23:36:10.253191 kubelet[2589]: I1027 23:36:10.253184 2589 scope.go:117] "RemoveContainer" containerID="e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f" Oct 27 23:36:10.254263 containerd[1490]: time="2025-10-27T23:36:10.254230457Z" level=info msg="RemoveContainer for \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\"" Oct 27 23:36:10.257322 containerd[1490]: time="2025-10-27T23:36:10.257270188Z" level=info msg="RemoveContainer for \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\" returns successfully" Oct 27 23:36:10.257527 kubelet[2589]: I1027 23:36:10.257491 2589 scope.go:117] "RemoveContainer" containerID="e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f" Oct 27 23:36:10.257915 containerd[1490]: 
time="2025-10-27T23:36:10.257865863Z" level=error msg="ContainerStatus for \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\": not found" Oct 27 23:36:10.258117 kubelet[2589]: E1027 23:36:10.258057 2589 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\": not found" containerID="e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f" Oct 27 23:36:10.258220 kubelet[2589]: I1027 23:36:10.258122 2589 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f"} err="failed to get container status \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\": rpc error: code = NotFound desc = an error occurred when try to find container \"e39af3d87c3a452df0560377a46d0b638f568d905137ce1d5e0a2a9e9f4fae4f\": not found" Oct 27 23:36:11.055544 sshd[4275]: Connection closed by 10.0.0.1 port 42698 Oct 27 23:36:11.055559 sshd-session[4272]: pam_unix(sshd:session): session closed for user core Oct 27 23:36:11.064938 systemd[1]: sshd@23-10.0.0.77:22-10.0.0.1:42698.service: Deactivated successfully. Oct 27 23:36:11.067417 systemd[1]: session-24.scope: Deactivated successfully. Oct 27 23:36:11.067774 systemd[1]: session-24.scope: Consumed 1.126s CPU time, 27.3M memory peak. Oct 27 23:36:11.068312 systemd-logind[1469]: Session 24 logged out. Waiting for processes to exit. Oct 27 23:36:11.076880 systemd[1]: Started sshd@24-10.0.0.77:22-10.0.0.1:54482.service - OpenSSH per-connection server daemon (10.0.0.1:54482). Oct 27 23:36:11.077732 systemd-logind[1469]: Removed session 24. Oct 27 23:36:11.119139 sshd[4433]: Accepted publickey for core from 10.0.0.1 port 54482 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:36:11.120543 sshd-session[4433]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:36:11.125112 systemd-logind[1469]: New session 25 of user core. Oct 27 23:36:11.135747 systemd[1]: Started session-25.scope - Session 25 of User core. Oct 27 23:36:11.793936 sshd[4436]: Connection closed by 10.0.0.1 port 54482 Oct 27 23:36:11.794389 sshd-session[4433]: pam_unix(sshd:session): session closed for user core Oct 27 23:36:11.811453 systemd[1]: sshd@24-10.0.0.77:22-10.0.0.1:54482.service: Deactivated successfully. Oct 27 23:36:11.817026 systemd[1]: session-25.scope: Deactivated successfully. Oct 27 23:36:11.818126 kubelet[2589]: I1027 23:36:11.817996 2589 memory_manager.go:355] "RemoveStaleState removing state" podUID="b83c2aca-9e90-405a-bc8d-1957cbc07032" containerName="cilium-operator" Oct 27 23:36:11.818126 kubelet[2589]: I1027 23:36:11.818032 2589 memory_manager.go:355] "RemoveStaleState removing state" podUID="800d64a0-164e-4232-a376-90c2c1bab9dc" containerName="cilium-agent" Oct 27 23:36:11.819357 systemd-logind[1469]: Session 25 logged out. Waiting for processes to exit. Oct 27 23:36:11.824538 systemd-logind[1469]: Removed session 25. Oct 27 23:36:11.833911 systemd[1]: Started sshd@25-10.0.0.77:22-10.0.0.1:54490.service - OpenSSH per-connection server daemon (10.0.0.1:54490). 
Oct 27 23:36:11.847627 systemd[1]: Created slice kubepods-burstable-podd443eb3a_7c7e_4b11_86c3_21fe47deab37.slice - libcontainer container kubepods-burstable-podd443eb3a_7c7e_4b11_86c3_21fe47deab37.slice. Oct 27 23:36:11.884462 sshd[4448]: Accepted publickey for core from 10.0.0.1 port 54490 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:36:11.885875 sshd-session[4448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:36:11.890647 systemd-logind[1469]: New session 26 of user core. Oct 27 23:36:11.899854 systemd[1]: Started session-26.scope - Session 26 of User core. Oct 27 23:36:11.909923 kubelet[2589]: I1027 23:36:11.909885 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-hostproc\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.909923 kubelet[2589]: I1027 23:36:11.909929 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-cilium-cgroup\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.909923 kubelet[2589]: I1027 23:36:11.909952 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w79wh\" (UniqueName: \"kubernetes.io/projected/d443eb3a-7c7e-4b11-86c3-21fe47deab37-kube-api-access-w79wh\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910402 kubelet[2589]: I1027 23:36:11.909971 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d443eb3a-7c7e-4b11-86c3-21fe47deab37-cilium-ipsec-secrets\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910402 kubelet[2589]: I1027 23:36:11.909986 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-host-proc-sys-net\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910402 kubelet[2589]: I1027 23:36:11.910000 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d443eb3a-7c7e-4b11-86c3-21fe47deab37-hubble-tls\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910402 kubelet[2589]: I1027 23:36:11.910015 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-lib-modules\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910402 kubelet[2589]: I1027 23:36:11.910030 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-cilium-run\") pod \"cilium-7jx25\" (UID: 
\"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910402 kubelet[2589]: I1027 23:36:11.910045 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-host-proc-sys-kernel\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910526 kubelet[2589]: I1027 23:36:11.910097 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-bpf-maps\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910526 kubelet[2589]: I1027 23:36:11.910141 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d443eb3a-7c7e-4b11-86c3-21fe47deab37-clustermesh-secrets\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910526 kubelet[2589]: I1027 23:36:11.910165 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-etc-cni-netd\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910526 kubelet[2589]: I1027 23:36:11.910181 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-xtables-lock\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910526 kubelet[2589]: I1027 23:36:11.910197 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d443eb3a-7c7e-4b11-86c3-21fe47deab37-cni-path\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.910526 kubelet[2589]: I1027 23:36:11.910229 2589 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d443eb3a-7c7e-4b11-86c3-21fe47deab37-cilium-config-path\") pod \"cilium-7jx25\" (UID: \"d443eb3a-7c7e-4b11-86c3-21fe47deab37\") " pod="kube-system/cilium-7jx25" Oct 27 23:36:11.951404 sshd[4451]: Connection closed by 10.0.0.1 port 54490 Oct 27 23:36:11.953045 sshd-session[4448]: pam_unix(sshd:session): session closed for user core Oct 27 23:36:11.953795 kubelet[2589]: I1027 23:36:11.953763 2589 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="800d64a0-164e-4232-a376-90c2c1bab9dc" path="/var/lib/kubelet/pods/800d64a0-164e-4232-a376-90c2c1bab9dc/volumes" Oct 27 23:36:11.954428 kubelet[2589]: I1027 23:36:11.954403 2589 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b83c2aca-9e90-405a-bc8d-1957cbc07032" path="/var/lib/kubelet/pods/b83c2aca-9e90-405a-bc8d-1957cbc07032/volumes" Oct 27 23:36:11.961233 systemd[1]: sshd@25-10.0.0.77:22-10.0.0.1:54490.service: Deactivated successfully. Oct 27 23:36:11.962903 systemd[1]: session-26.scope: Deactivated successfully. 
Oct 27 23:36:11.963754 systemd-logind[1469]: Session 26 logged out. Waiting for processes to exit. Oct 27 23:36:11.965945 systemd[1]: Started sshd@26-10.0.0.77:22-10.0.0.1:54500.service - OpenSSH per-connection server daemon (10.0.0.1:54500). Oct 27 23:36:11.966745 systemd-logind[1469]: Removed session 26. Oct 27 23:36:12.006309 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 54500 ssh2: RSA SHA256:TJBPbfBwCcmP9LAyc+zYRSjT7QEK4QwIU2BKsb1nH8U Oct 27 23:36:12.007710 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Oct 27 23:36:12.012510 systemd-logind[1469]: New session 27 of user core. Oct 27 23:36:12.018189 kubelet[2589]: E1027 23:36:12.017919 2589 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 27 23:36:12.018762 systemd[1]: Started session-27.scope - Session 27 of User core. Oct 27 23:36:12.159038 kubelet[2589]: E1027 23:36:12.158912 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:36:12.159740 containerd[1490]: time="2025-10-27T23:36:12.159396840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jx25,Uid:d443eb3a-7c7e-4b11-86c3-21fe47deab37,Namespace:kube-system,Attempt:0,}" Oct 27 23:36:12.184905 containerd[1490]: time="2025-10-27T23:36:12.183981662Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 27 23:36:12.184905 containerd[1490]: time="2025-10-27T23:36:12.184892135Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 27 23:36:12.184905 containerd[1490]: time="2025-10-27T23:36:12.184912855Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:36:12.185065 containerd[1490]: time="2025-10-27T23:36:12.184989534Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 27 23:36:12.215809 systemd[1]: Started cri-containerd-aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f.scope - libcontainer container aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f. 
Oct 27 23:36:12.238771 containerd[1490]: time="2025-10-27T23:36:12.238728224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jx25,Uid:d443eb3a-7c7e-4b11-86c3-21fe47deab37,Namespace:kube-system,Attempt:0,} returns sandbox id \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\"" Oct 27 23:36:12.239375 kubelet[2589]: E1027 23:36:12.239352 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:36:12.241620 containerd[1490]: time="2025-10-27T23:36:12.241586923Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 27 23:36:12.253483 containerd[1490]: time="2025-10-27T23:36:12.253423557Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975\"" Oct 27 23:36:12.254930 containerd[1490]: time="2025-10-27T23:36:12.254895226Z" level=info msg="StartContainer for \"efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975\"" Oct 27 23:36:12.285790 systemd[1]: Started cri-containerd-efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975.scope - libcontainer container efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975. Oct 27 23:36:12.312151 containerd[1490]: time="2025-10-27T23:36:12.312110251Z" level=info msg="StartContainer for \"efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975\" returns successfully" Oct 27 23:36:12.319185 systemd[1]: cri-containerd-efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975.scope: Deactivated successfully. Oct 27 23:36:12.350980 containerd[1490]: time="2025-10-27T23:36:12.350886249Z" level=info msg="shim disconnected" id=efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975 namespace=k8s.io Oct 27 23:36:12.350980 containerd[1490]: time="2025-10-27T23:36:12.350938928Z" level=warning msg="cleaning up after shim disconnected" id=efa0a18ad89a797f2ebb27059456ba6def150fc775f37cd475e0e5c9d315b975 namespace=k8s.io Oct 27 23:36:12.350980 containerd[1490]: time="2025-10-27T23:36:12.350947208Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:36:13.020524 systemd[1]: run-containerd-runc-k8s.io-aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f-runc.OT6Pq8.mount: Deactivated successfully. 
Oct 27 23:36:13.215109 kubelet[2589]: E1027 23:36:13.215077 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 27 23:36:13.218462 containerd[1490]: time="2025-10-27T23:36:13.218316096Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Oct 27 23:36:13.236572 containerd[1490]: time="2025-10-27T23:36:13.236504903Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972\"" Oct 27 23:36:13.237686 containerd[1490]: time="2025-10-27T23:36:13.237647776Z" level=info msg="StartContainer for \"0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972\"" Oct 27 23:36:13.263760 systemd[1]: Started cri-containerd-0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972.scope - libcontainer container 0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972. Oct 27 23:36:13.288496 containerd[1490]: time="2025-10-27T23:36:13.288455861Z" level=info msg="StartContainer for \"0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972\" returns successfully" Oct 27 23:36:13.294000 systemd[1]: cri-containerd-0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972.scope: Deactivated successfully. Oct 27 23:36:13.315968 containerd[1490]: time="2025-10-27T23:36:13.315910611Z" level=info msg="shim disconnected" id=0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972 namespace=k8s.io Oct 27 23:36:13.315968 containerd[1490]: time="2025-10-27T23:36:13.315966890Z" level=warning msg="cleaning up after shim disconnected" id=0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972 namespace=k8s.io Oct 27 23:36:13.315968 containerd[1490]: time="2025-10-27T23:36:13.315976210Z" level=info msg="cleaning up dead shim" namespace=k8s.io Oct 27 23:36:13.451463 kubelet[2589]: I1027 23:36:13.451399 2589 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-27T23:36:13Z","lastTransitionTime":"2025-10-27T23:36:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Oct 27 23:36:14.021609 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b2019ec1823ce4475a5fe68dbc56a716795ad0c011bf97f95e5c718e22e6972-rootfs.mount: Deactivated successfully. 
Oct 27 23:36:14.219479 kubelet[2589]: E1027 23:36:14.219445 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:14.226857 containerd[1490]: time="2025-10-27T23:36:14.226815715Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Oct 27 23:36:14.249439 containerd[1490]: time="2025-10-27T23:36:14.249394878Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575\""
Oct 27 23:36:14.251011 containerd[1490]: time="2025-10-27T23:36:14.249961435Z" level=info msg="StartContainer for \"2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575\""
Oct 27 23:36:14.281761 systemd[1]: Started cri-containerd-2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575.scope - libcontainer container 2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575.
Oct 27 23:36:14.310938 systemd[1]: cri-containerd-2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575.scope: Deactivated successfully.
Oct 27 23:36:14.315470 containerd[1490]: time="2025-10-27T23:36:14.315424657Z" level=info msg="StartContainer for \"2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575\" returns successfully"
Oct 27 23:36:14.340313 containerd[1490]: time="2025-10-27T23:36:14.340253928Z" level=info msg="shim disconnected" id=2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575 namespace=k8s.io
Oct 27 23:36:14.340313 containerd[1490]: time="2025-10-27T23:36:14.340308328Z" level=warning msg="cleaning up after shim disconnected" id=2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575 namespace=k8s.io
Oct 27 23:36:14.340313 containerd[1490]: time="2025-10-27T23:36:14.340316408Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 27 23:36:15.020901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c88131a65e0922923fc38ea8cfa26a5683d92307c38a5e5e80b4b7c371e7575-rootfs.mount: Deactivated successfully.
Oct 27 23:36:15.225166 kubelet[2589]: E1027 23:36:15.225127 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:15.234047 containerd[1490]: time="2025-10-27T23:36:15.233687022Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Oct 27 23:36:15.251917 containerd[1490]: time="2025-10-27T23:36:15.251868786Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710\""
Oct 27 23:36:15.253741 containerd[1490]: time="2025-10-27T23:36:15.253702539Z" level=info msg="StartContainer for \"736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710\""
Oct 27 23:36:15.284778 systemd[1]: Started cri-containerd-736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710.scope - libcontainer container 736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710.
Oct 27 23:36:15.330646 systemd[1]: cri-containerd-736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710.scope: Deactivated successfully.
Oct 27 23:36:15.351154 containerd[1490]: time="2025-10-27T23:36:15.351039653Z" level=info msg="StartContainer for \"736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710\" returns successfully"
Oct 27 23:36:15.383715 containerd[1490]: time="2025-10-27T23:36:15.383545398Z" level=info msg="shim disconnected" id=736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710 namespace=k8s.io
Oct 27 23:36:15.384114 containerd[1490]: time="2025-10-27T23:36:15.383938476Z" level=warning msg="cleaning up after shim disconnected" id=736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710 namespace=k8s.io
Oct 27 23:36:15.384114 containerd[1490]: time="2025-10-27T23:36:15.383956436Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Oct 27 23:36:16.020913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-736f8cdd38ec7123782ffdba56e7012f0cc16660dfe9d0cef66a6592104f6710-rootfs.mount: Deactivated successfully.
Oct 27 23:36:16.229236 kubelet[2589]: E1027 23:36:16.229200 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:16.235601 containerd[1490]: time="2025-10-27T23:36:16.233999238Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Oct 27 23:36:16.257010 containerd[1490]: time="2025-10-27T23:36:16.256235327Z" level=info msg="CreateContainer within sandbox \"aad8bce4f14c640f01f1e0102c341ffb179e1039f0791bf1ec9dbc5cfbccf05f\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9eae36ea3d316878a9c4fdc5cf4d400bd04072515d5b933b854f751607e4fc4a\""
Oct 27 23:36:16.257010 containerd[1490]: time="2025-10-27T23:36:16.256697245Z" level=info msg="StartContainer for \"9eae36ea3d316878a9c4fdc5cf4d400bd04072515d5b933b854f751607e4fc4a\""
Oct 27 23:36:16.288780 systemd[1]: Started cri-containerd-9eae36ea3d316878a9c4fdc5cf4d400bd04072515d5b933b854f751607e4fc4a.scope - libcontainer container 9eae36ea3d316878a9c4fdc5cf4d400bd04072515d5b933b854f751607e4fc4a.
Oct 27 23:36:16.314281 containerd[1490]: time="2025-10-27T23:36:16.314232501Z" level=info msg="StartContainer for \"9eae36ea3d316878a9c4fdc5cf4d400bd04072515d5b933b854f751607e4fc4a\" returns successfully"
Oct 27 23:36:16.605609 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Oct 27 23:36:17.234676 kubelet[2589]: E1027 23:36:17.234623 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:17.260478 kubelet[2589]: I1027 23:36:17.260400 2589 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7jx25" podStartSLOduration=6.260379357 podStartE2EDuration="6.260379357s" podCreationTimestamp="2025-10-27 23:36:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-27 23:36:17.25910348 +0000 UTC m=+85.384387841" watchObservedRunningTime="2025-10-27 23:36:17.260379357 +0000 UTC m=+85.385663718"
Oct 27 23:36:17.949220 kubelet[2589]: E1027 23:36:17.949107 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:18.236541 kubelet[2589]: E1027 23:36:18.236447 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:18.949700 kubelet[2589]: E1027 23:36:18.949657 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:19.488053 systemd-networkd[1405]: lxc_health: Link UP
Oct 27 23:36:19.505343 systemd-networkd[1405]: lxc_health: Gained carrier
Oct 27 23:36:20.161106 kubelet[2589]: E1027 23:36:20.160857 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:20.239767 kubelet[2589]: E1027 23:36:20.239724 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:20.999728 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Oct 27 23:36:21.241469 kubelet[2589]: E1027 23:36:21.241413 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 27 23:36:24.787380 sshd[4464]: Connection closed by 10.0.0.1 port 54500
Oct 27 23:36:24.787971 sshd-session[4457]: pam_unix(sshd:session): session closed for user core
Oct 27 23:36:24.791504 systemd[1]: sshd@26-10.0.0.77:22-10.0.0.1:54500.service: Deactivated successfully.
Oct 27 23:36:24.793685 systemd[1]: session-27.scope: Deactivated successfully.
Oct 27 23:36:24.795160 systemd-logind[1469]: Session 27 logged out. Waiting for processes to exit.
Oct 27 23:36:24.796398 systemd-logind[1469]: Removed session 27.
Oct 27 23:36:24.948932 kubelet[2589]: E1027 23:36:24.948888 2589 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"