Nov 6 23:05:08.853092 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Nov 6 23:05:08.853113 kernel: Linux version 6.6.113-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Thu Nov 6 21:59:06 -00 2025
Nov 6 23:05:08.853122 kernel: KASLR enabled
Nov 6 23:05:08.853128 kernel: efi: EFI v2.7 by EDK II
Nov 6 23:05:08.853133 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Nov 6 23:05:08.853139 kernel: random: crng init done
Nov 6 23:05:08.853145 kernel: secureboot: Secure boot disabled
Nov 6 23:05:08.853151 kernel: ACPI: Early table checksum verification disabled
Nov 6 23:05:08.853157 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Nov 6 23:05:08.853164 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Nov 6 23:05:08.853170 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853176 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853181 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853187 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853194 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853202 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853208 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853214 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853220 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Nov 6 23:05:08.853226 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Nov 6 23:05:08.853232 kernel: NUMA: Failed to initialise from firmware
Nov 6 23:05:08.853238 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Nov 6 23:05:08.853244 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Nov 6 23:05:08.853250 kernel: Zone ranges:
Nov 6 23:05:08.853256 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Nov 6 23:05:08.853263 kernel: DMA32 empty
Nov 6 23:05:08.853269 kernel: Normal empty
Nov 6 23:05:08.853275 kernel: Movable zone start for each node
Nov 6 23:05:08.853281 kernel: Early memory node ranges
Nov 6 23:05:08.853287 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Nov 6 23:05:08.853293 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Nov 6 23:05:08.853299 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Nov 6 23:05:08.853305 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Nov 6 23:05:08.853311 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Nov 6 23:05:08.853317 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Nov 6 23:05:08.853323 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Nov 6 23:05:08.853328 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Nov 6 23:05:08.853336 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Nov 6 23:05:08.853342 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Nov 6 23:05:08.853348 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Nov 6 23:05:08.853356 kernel: psci: probing for conduit method from ACPI.
Nov 6 23:05:08.853363 kernel: psci: PSCIv1.1 detected in firmware.
Nov 6 23:05:08.853369 kernel: psci: Using standard PSCI v0.2 function IDs
Nov 6 23:05:08.853377 kernel: psci: Trusted OS migration not required
Nov 6 23:05:08.853383 kernel: psci: SMC Calling Convention v1.1
Nov 6 23:05:08.853390 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Nov 6 23:05:08.853396 kernel: percpu: Embedded 31 pages/cpu s86120 r8192 d32664 u126976
Nov 6 23:05:08.853403 kernel: pcpu-alloc: s86120 r8192 d32664 u126976 alloc=31*4096
Nov 6 23:05:08.853409 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Nov 6 23:05:08.853416 kernel: Detected PIPT I-cache on CPU0
Nov 6 23:05:08.853422 kernel: CPU features: detected: GIC system register CPU interface
Nov 6 23:05:08.853428 kernel: CPU features: detected: Hardware dirty bit management
Nov 6 23:05:08.853435 kernel: CPU features: detected: Spectre-v4
Nov 6 23:05:08.853442 kernel: CPU features: detected: Spectre-BHB
Nov 6 23:05:08.853448 kernel: CPU features: kernel page table isolation forced ON by KASLR
Nov 6 23:05:08.853455 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Nov 6 23:05:08.853461 kernel: CPU features: detected: ARM erratum 1418040
Nov 6 23:05:08.853474 kernel: CPU features: detected: SSBS not fully self-synchronizing
Nov 6 23:05:08.853482 kernel: alternatives: applying boot alternatives
Nov 6 23:05:08.853489 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=463065366e5b9a391e66d180eedbf8fe1b0462c2e722921ef25580943d9b67c6
Nov 6 23:05:08.853496 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 6 23:05:08.853503 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Nov 6 23:05:08.853509 kernel: Fallback order for Node 0: 0
Nov 6 23:05:08.853516 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Nov 6 23:05:08.853525 kernel: Policy zone: DMA
Nov 6 23:05:08.853531 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 6 23:05:08.853538 kernel: software IO TLB: area num 4.
Nov 6 23:05:08.853544 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Nov 6 23:05:08.853551 kernel: Memory: 2387412K/2572288K available (10368K kernel code, 2180K rwdata, 8104K rodata, 38400K init, 897K bss, 184876K reserved, 0K cma-reserved)
Nov 6 23:05:08.853558 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Nov 6 23:05:08.853564 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 6 23:05:08.853571 kernel: rcu: RCU event tracing is enabled.
Nov 6 23:05:08.853578 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Nov 6 23:05:08.853584 kernel: Trampoline variant of Tasks RCU enabled.
Nov 6 23:05:08.853591 kernel: Tracing variant of Tasks RCU enabled.
Nov 6 23:05:08.853597 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 6 23:05:08.853605 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Nov 6 23:05:08.853612 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Nov 6 23:05:08.853618 kernel: GICv3: 256 SPIs implemented
Nov 6 23:05:08.853625 kernel: GICv3: 0 Extended SPIs implemented
Nov 6 23:05:08.853631 kernel: Root IRQ handler: gic_handle_irq
Nov 6 23:05:08.853637 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Nov 6 23:05:08.853644 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Nov 6 23:05:08.853650 kernel: ITS [mem 0x08080000-0x0809ffff]
Nov 6 23:05:08.853656 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Nov 6 23:05:08.853663 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Nov 6 23:05:08.853670 kernel: GICv3: using LPI property table @0x00000000400f0000
Nov 6 23:05:08.853677 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Nov 6 23:05:08.853684 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 6 23:05:08.853690 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 6 23:05:08.853697 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Nov 6 23:05:08.853703 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Nov 6 23:05:08.853710 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Nov 6 23:05:08.853716 kernel: arm-pv: using stolen time PV
Nov 6 23:05:08.853723 kernel: Console: colour dummy device 80x25
Nov 6 23:05:08.853730 kernel: ACPI: Core revision 20230628
Nov 6 23:05:08.853736 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Nov 6 23:05:08.853743 kernel: pid_max: default: 32768 minimum: 301
Nov 6 23:05:08.853751 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 6 23:05:08.853758 kernel: landlock: Up and running.
Nov 6 23:05:08.853764 kernel: SELinux: Initializing.
Nov 6 23:05:08.853792 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 23:05:08.853799 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Nov 6 23:05:08.853806 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 23:05:08.853813 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Nov 6 23:05:08.853820 kernel: rcu: Hierarchical SRCU implementation.
Nov 6 23:05:08.853827 kernel: rcu: Max phase no-delay instances is 400.
Nov 6 23:05:08.853835 kernel: Platform MSI: ITS@0x8080000 domain created
Nov 6 23:05:08.853842 kernel: PCI/MSI: ITS@0x8080000 domain created
Nov 6 23:05:08.853849 kernel: Remapping and enabling EFI services.
Nov 6 23:05:08.853855 kernel: smp: Bringing up secondary CPUs ...
Nov 6 23:05:08.853862 kernel: Detected PIPT I-cache on CPU1
Nov 6 23:05:08.853869 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Nov 6 23:05:08.853875 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Nov 6 23:05:08.853882 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 6 23:05:08.853889 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Nov 6 23:05:08.853896 kernel: Detected PIPT I-cache on CPU2
Nov 6 23:05:08.853903 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Nov 6 23:05:08.853915 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Nov 6 23:05:08.853923 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 6 23:05:08.853930 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Nov 6 23:05:08.853937 kernel: Detected PIPT I-cache on CPU3
Nov 6 23:05:08.853944 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Nov 6 23:05:08.853951 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Nov 6 23:05:08.853959 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Nov 6 23:05:08.853966 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Nov 6 23:05:08.853973 kernel: smp: Brought up 1 node, 4 CPUs
Nov 6 23:05:08.853980 kernel: SMP: Total of 4 processors activated.
Nov 6 23:05:08.853987 kernel: CPU features: detected: 32-bit EL0 Support
Nov 6 23:05:08.853994 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Nov 6 23:05:08.854001 kernel: CPU features: detected: Common not Private translations
Nov 6 23:05:08.854008 kernel: CPU features: detected: CRC32 instructions
Nov 6 23:05:08.854015 kernel: CPU features: detected: Enhanced Virtualization Traps
Nov 6 23:05:08.854023 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Nov 6 23:05:08.854031 kernel: CPU features: detected: LSE atomic instructions
Nov 6 23:05:08.854037 kernel: CPU features: detected: Privileged Access Never
Nov 6 23:05:08.854044 kernel: CPU features: detected: RAS Extension Support
Nov 6 23:05:08.854051 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Nov 6 23:05:08.854058 kernel: CPU: All CPU(s) started at EL1
Nov 6 23:05:08.854065 kernel: alternatives: applying system-wide alternatives
Nov 6 23:05:08.854072 kernel: devtmpfs: initialized
Nov 6 23:05:08.854079 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 6 23:05:08.854088 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Nov 6 23:05:08.854095 kernel: pinctrl core: initialized pinctrl subsystem
Nov 6 23:05:08.854101 kernel: SMBIOS 3.0.0 present.
Nov 6 23:05:08.854108 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Nov 6 23:05:08.854115 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 6 23:05:08.854122 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Nov 6 23:05:08.854129 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Nov 6 23:05:08.854136 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Nov 6 23:05:08.854143 kernel: audit: initializing netlink subsys (disabled)
Nov 6 23:05:08.854152 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Nov 6 23:05:08.854159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 6 23:05:08.854166 kernel: cpuidle: using governor menu
Nov 6 23:05:08.854173 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Nov 6 23:05:08.854180 kernel: ASID allocator initialised with 32768 entries
Nov 6 23:05:08.854187 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 6 23:05:08.854193 kernel: Serial: AMBA PL011 UART driver
Nov 6 23:05:08.854200 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Nov 6 23:05:08.854208 kernel: Modules: 0 pages in range for non-PLT usage
Nov 6 23:05:08.854216 kernel: Modules: 509248 pages in range for PLT usage
Nov 6 23:05:08.854223 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 6 23:05:08.854230 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Nov 6 23:05:08.854237 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Nov 6 23:05:08.854244 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Nov 6 23:05:08.854251 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 6 23:05:08.854258 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Nov 6 23:05:08.854265 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Nov 6 23:05:08.854272 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Nov 6 23:05:08.854281 kernel: ACPI: Added _OSI(Module Device)
Nov 6 23:05:08.854288 kernel: ACPI: Added _OSI(Processor Device)
Nov 6 23:05:08.854295 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 6 23:05:08.854302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 6 23:05:08.854308 kernel: ACPI: Interpreter enabled
Nov 6 23:05:08.854315 kernel: ACPI: Using GIC for interrupt routing
Nov 6 23:05:08.854322 kernel: ACPI: MCFG table detected, 1 entries
Nov 6 23:05:08.854329 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Nov 6 23:05:08.854336 kernel: printk: console [ttyAMA0] enabled
Nov 6 23:05:08.854343 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Nov 6 23:05:08.854503 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Nov 6 23:05:08.854587 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Nov 6 23:05:08.854654 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Nov 6 23:05:08.854718 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Nov 6 23:05:08.854833 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Nov 6 23:05:08.854844 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Nov 6 23:05:08.854855 kernel: PCI host bridge to bus 0000:00
Nov 6 23:05:08.854926 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Nov 6 23:05:08.854984 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Nov 6 23:05:08.855042 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Nov 6 23:05:08.855096 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Nov 6 23:05:08.855171 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Nov 6 23:05:08.855244 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Nov 6 23:05:08.855312 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Nov 6 23:05:08.855382 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Nov 6 23:05:08.855445 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 6 23:05:08.855524 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Nov 6 23:05:08.855590 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Nov 6 23:05:08.855655 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Nov 6 23:05:08.855713 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Nov 6 23:05:08.855780 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Nov 6 23:05:08.855853 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Nov 6 23:05:08.855862 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Nov 6 23:05:08.855870 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Nov 6 23:05:08.855877 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Nov 6 23:05:08.855884 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Nov 6 23:05:08.855891 kernel: iommu: Default domain type: Translated
Nov 6 23:05:08.855899 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Nov 6 23:05:08.855908 kernel: efivars: Registered efivars operations
Nov 6 23:05:08.855915 kernel: vgaarb: loaded
Nov 6 23:05:08.855937 kernel: clocksource: Switched to clocksource arch_sys_counter
Nov 6 23:05:08.855944 kernel: VFS: Disk quotas dquot_6.6.0
Nov 6 23:05:08.855951 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 6 23:05:08.855974 kernel: pnp: PnP ACPI init
Nov 6 23:05:08.856045 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Nov 6 23:05:08.856055 kernel: pnp: PnP ACPI: found 1 devices
Nov 6 23:05:08.856137 kernel: NET: Registered PF_INET protocol family
Nov 6 23:05:08.856154 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 6 23:05:08.856162 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Nov 6 23:05:08.856169 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 6 23:05:08.856176 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Nov 6 23:05:08.856183 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Nov 6 23:05:08.856191 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Nov 6 23:05:08.856198 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 23:05:08.856205 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Nov 6 23:05:08.856213 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 6 23:05:08.856220 kernel: PCI: CLS 0 bytes, default 64
Nov 6 23:05:08.856227 kernel: kvm [1]: HYP mode not available
Nov 6 23:05:08.856234 kernel: Initialise system trusted keyrings
Nov 6 23:05:08.856242 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Nov 6 23:05:08.856249 kernel: Key type asymmetric registered
Nov 6 23:05:08.856256 kernel: Asymmetric key parser 'x509' registered
Nov 6 23:05:08.856263 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Nov 6 23:05:08.856270 kernel: io scheduler mq-deadline registered
Nov 6 23:05:08.856278 kernel: io scheduler kyber registered
Nov 6 23:05:08.856285 kernel: io scheduler bfq registered
Nov 6 23:05:08.856292 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Nov 6 23:05:08.856299 kernel: ACPI: button: Power Button [PWRB]
Nov 6 23:05:08.856306 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Nov 6 23:05:08.856397 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Nov 6 23:05:08.856408 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 6 23:05:08.856415 kernel: thunder_xcv, ver 1.0
Nov 6 23:05:08.856422 kernel: thunder_bgx, ver 1.0
Nov 6 23:05:08.856431 kernel: nicpf, ver 1.0
Nov 6 23:05:08.856438 kernel: nicvf, ver 1.0
Nov 6 23:05:08.856531 kernel: rtc-efi rtc-efi.0: registered as rtc0
Nov 6 23:05:08.856594 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-11-06T23:05:08 UTC (1762470308)
Nov 6 23:05:08.856604 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 6 23:05:08.856611 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Nov 6 23:05:08.856618 kernel: watchdog: Delayed init of the lockup detector failed: -19
Nov 6 23:05:08.856625 kernel: watchdog: Hard watchdog permanently disabled
Nov 6 23:05:08.856635 kernel: NET: Registered PF_INET6 protocol family
Nov 6 23:05:08.856642 kernel: Segment Routing with IPv6
Nov 6 23:05:08.856650 kernel: In-situ OAM (IOAM) with IPv6
Nov 6 23:05:08.856657 kernel: NET: Registered PF_PACKET protocol family
Nov 6 23:05:08.856664 kernel: Key type dns_resolver registered
Nov 6 23:05:08.856671 kernel: registered taskstats version 1
Nov 6 23:05:08.856678 kernel: Loading compiled-in X.509 certificates
Nov 6 23:05:08.856685 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.113-flatcar: e53d3b094875ce4245a8b2684246260baeee1996'
Nov 6 23:05:08.856692 kernel: Key type .fscrypt registered
Nov 6 23:05:08.856699 kernel: Key type fscrypt-provisioning registered
Nov 6 23:05:08.856707 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 6 23:05:08.856714 kernel: ima: Allocated hash algorithm: sha1
Nov 6 23:05:08.856721 kernel: ima: No architecture policies found
Nov 6 23:05:08.856728 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Nov 6 23:05:08.856735 kernel: clk: Disabling unused clocks
Nov 6 23:05:08.856742 kernel: Freeing unused kernel memory: 38400K
Nov 6 23:05:08.856749 kernel: Run /init as init process
Nov 6 23:05:08.856756 kernel: with arguments:
Nov 6 23:05:08.856763 kernel: /init
Nov 6 23:05:08.856815 kernel: with environment:
Nov 6 23:05:08.856822 kernel: HOME=/
Nov 6 23:05:08.856829 kernel: TERM=linux
Nov 6 23:05:08.856837 systemd[1]: Successfully made /usr/ read-only.
Nov 6 23:05:08.856846 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Nov 6 23:05:08.856854 systemd[1]: Detected virtualization kvm.
Nov 6 23:05:08.856867 systemd[1]: Detected architecture arm64.
Nov 6 23:05:08.856876 systemd[1]: Running in initrd.
Nov 6 23:05:08.856884 systemd[1]: No hostname configured, using default hostname.
Nov 6 23:05:08.856891 systemd[1]: Hostname set to .
Nov 6 23:05:08.856899 systemd[1]: Initializing machine ID from VM UUID.
Nov 6 23:05:08.856906 systemd[1]: Queued start job for default target initrd.target.
Nov 6 23:05:08.856914 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:05:08.856922 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:05:08.856930 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 6 23:05:08.856939 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 6 23:05:08.856946 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 6 23:05:08.856955 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 6 23:05:08.856964 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 6 23:05:08.856972 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 6 23:05:08.856980 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:05:08.856988 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:05:08.856997 systemd[1]: Reached target paths.target - Path Units.
Nov 6 23:05:08.857005 systemd[1]: Reached target slices.target - Slice Units.
Nov 6 23:05:08.857012 systemd[1]: Reached target swap.target - Swaps.
Nov 6 23:05:08.857020 systemd[1]: Reached target timers.target - Timer Units.
Nov 6 23:05:08.857027 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 23:05:08.857035 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 23:05:08.857042 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 6 23:05:08.857050 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Nov 6 23:05:08.857057 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:05:08.857066 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 6 23:05:08.857074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 6 23:05:08.857081 systemd[1]: Reached target sockets.target - Socket Units.
Nov 6 23:05:08.857089 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 6 23:05:08.857096 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 6 23:05:08.857104 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 6 23:05:08.857112 systemd[1]: Starting systemd-fsck-usr.service...
Nov 6 23:05:08.857119 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 6 23:05:08.857128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 6 23:05:08.857137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:05:08.857145 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 6 23:05:08.857153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:05:08.857162 systemd[1]: Finished systemd-fsck-usr.service.
Nov 6 23:05:08.857195 systemd-journald[240]: Collecting audit messages is disabled.
Nov 6 23:05:08.857215 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:05:08.857224 systemd-journald[240]: Journal started
Nov 6 23:05:08.857246 systemd-journald[240]: Runtime Journal (/run/log/journal/0121580f92fe464f9e14201033a2060b) is 5.9M, max 47.3M, 41.4M free.
Nov 6 23:05:08.861838 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 6 23:05:08.849313 systemd-modules-load[241]: Inserted module 'overlay'
Nov 6 23:05:08.865049 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 23:05:08.867296 kernel: Bridge firewalling registered
Nov 6 23:05:08.867334 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 6 23:05:08.866730 systemd-modules-load[241]: Inserted module 'br_netfilter'
Nov 6 23:05:08.872661 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 6 23:05:08.873130 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:05:08.875794 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 6 23:05:08.882964 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 6 23:05:08.884647 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 6 23:05:08.888589 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 6 23:05:08.892202 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:05:08.895714 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:05:08.897193 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 6 23:05:08.899884 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 6 23:05:08.903455 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 6 23:05:08.906106 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 6 23:05:08.924892 dracut-cmdline[279]: dracut-dracut-053
Nov 6 23:05:08.927428 dracut-cmdline[279]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=463065366e5b9a391e66d180eedbf8fe1b0462c2e722921ef25580943d9b67c6
Nov 6 23:05:08.936863 systemd-resolved[280]: Positive Trust Anchors:
Nov 6 23:05:08.936879 systemd-resolved[280]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 6 23:05:08.936911 systemd-resolved[280]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 6 23:05:08.942885 systemd-resolved[280]: Defaulting to hostname 'linux'.
Nov 6 23:05:08.944247 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 6 23:05:08.948622 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 6 23:05:08.999794 kernel: SCSI subsystem initialized
Nov 6 23:05:09.003795 kernel: Loading iSCSI transport class v2.0-870.
Nov 6 23:05:09.011842 kernel: iscsi: registered transport (tcp)
Nov 6 23:05:09.024787 kernel: iscsi: registered transport (qla4xxx)
Nov 6 23:05:09.024806 kernel: QLogic iSCSI HBA Driver
Nov 6 23:05:09.066154 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 6 23:05:09.078910 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 6 23:05:09.093998 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 6 23:05:09.094027 kernel: device-mapper: uevent: version 1.0.3
Nov 6 23:05:09.095167 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 6 23:05:09.139812 kernel: raid6: neonx8 gen() 15767 MB/s
Nov 6 23:05:09.156805 kernel: raid6: neonx4 gen() 15815 MB/s
Nov 6 23:05:09.173797 kernel: raid6: neonx2 gen() 13265 MB/s
Nov 6 23:05:09.190793 kernel: raid6: neonx1 gen() 10517 MB/s
Nov 6 23:05:09.207802 kernel: raid6: int64x8 gen() 6791 MB/s
Nov 6 23:05:09.224798 kernel: raid6: int64x4 gen() 7346 MB/s
Nov 6 23:05:09.241802 kernel: raid6: int64x2 gen() 6104 MB/s
Nov 6 23:05:09.259078 kernel: raid6: int64x1 gen() 5044 MB/s
Nov 6 23:05:09.259103 kernel: raid6: using algorithm neonx4 gen() 15815 MB/s
Nov 6 23:05:09.277048 kernel: raid6: .... xor() 12419 MB/s, rmw enabled
Nov 6 23:05:09.277085 kernel: raid6: using neon recovery algorithm
Nov 6 23:05:09.283292 kernel: xor: measuring software checksum speed
Nov 6 23:05:09.283308 kernel: 8regs : 21630 MB/sec
Nov 6 23:05:09.283317 kernel: 32regs : 21699 MB/sec
Nov 6 23:05:09.284001 kernel: arm64_neon : 27841 MB/sec
Nov 6 23:05:09.284015 kernel: xor: using function: arm64_neon (27841 MB/sec)
Nov 6 23:05:09.332803 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 6 23:05:09.343751 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 6 23:05:09.362904 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 6 23:05:09.377306 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Nov 6 23:05:09.380955 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 6 23:05:09.398954 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 6 23:05:09.410182 dracut-pre-trigger[473]: rd.md=0: removing MD RAID activation
Nov 6 23:05:09.435813 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 23:05:09.451997 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 6 23:05:09.493211 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:05:09.502974 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 6 23:05:09.515042 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 6 23:05:09.518292 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 23:05:09.520130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:05:09.522615 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 6 23:05:09.528947 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 6 23:05:09.541972 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 23:05:09.545827 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Nov 6 23:05:09.546894 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Nov 6 23:05:09.554177 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Nov 6 23:05:09.554205 kernel: GPT:9289727 != 19775487
Nov 6 23:05:09.554215 kernel: GPT:Alternate GPT header not at the end of the disk.
Nov 6 23:05:09.554812 kernel: GPT:9289727 != 19775487
Nov 6 23:05:09.558784 kernel: GPT: Use GNU Parted to correct GPT errors.
Nov 6 23:05:09.558822 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 23:05:09.571064 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 6 23:05:09.571189 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:05:09.576575 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 23:05:09.580492 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 6 23:05:09.580726 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:05:09.585777 kernel: BTRFS: device fsid 8ac35527-52fd-4925-acbb-f12804e07c02 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (513)
Nov 6 23:05:09.585760 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:05:09.592984 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 6 23:05:09.595240 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (523)
Nov 6 23:05:09.609231 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Nov 6 23:05:09.611814 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 6 23:05:09.622927 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Nov 6 23:05:09.624358 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Nov 6 23:05:09.637578 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Nov 6 23:05:09.645168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Nov 6 23:05:09.660919 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 6 23:05:09.665924 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 6 23:05:09.667492 disk-uuid[554]: Primary Header is updated.
Nov 6 23:05:09.667492 disk-uuid[554]: Secondary Entries is updated.
Nov 6 23:05:09.667492 disk-uuid[554]: Secondary Header is updated.
Nov 6 23:05:09.671898 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 23:05:09.684983 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 6 23:05:10.677788 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Nov 6 23:05:10.678380 disk-uuid[555]: The operation has completed successfully.
Nov 6 23:05:10.701988 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 6 23:05:10.702111 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 6 23:05:10.742933 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 6 23:05:10.745812 sh[575]: Success
Nov 6 23:05:10.755791 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Nov 6 23:05:10.786853 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 6 23:05:10.803245 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 6 23:05:10.805566 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 6 23:05:10.816203 kernel: BTRFS info (device dm-0): first mount of filesystem 8ac35527-52fd-4925-acbb-f12804e07c02
Nov 6 23:05:10.816245 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:05:10.817491 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 6 23:05:10.817508 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 6 23:05:10.819166 kernel: BTRFS info (device dm-0): using free space tree
Nov 6 23:05:10.822909 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 6 23:05:10.824371 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 6 23:05:10.832944 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 6 23:05:10.834789 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 6 23:05:10.850743 kernel: BTRFS info (device vda6): first mount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:05:10.850814 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:05:10.850825 kernel: BTRFS info (device vda6): using free space tree
Nov 6 23:05:10.853787 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 6 23:05:10.859083 kernel: BTRFS info (device vda6): last unmount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:05:10.862008 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 6 23:05:10.870209 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 6 23:05:10.929820 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 23:05:10.937970 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 6 23:05:10.940445 ignition[667]: Ignition 2.20.0
Nov 6 23:05:10.940454 ignition[667]: Stage: fetch-offline
Nov 6 23:05:10.940498 ignition[667]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:05:10.940506 ignition[667]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:05:10.940665 ignition[667]: parsed url from cmdline: ""
Nov 6 23:05:10.940669 ignition[667]: no config URL provided
Nov 6 23:05:10.940674 ignition[667]: reading system config file "/usr/lib/ignition/user.ign"
Nov 6 23:05:10.940681 ignition[667]: no config at "/usr/lib/ignition/user.ign"
Nov 6 23:05:10.940704 ignition[667]: op(1): [started] loading QEMU firmware config module
Nov 6 23:05:10.940708 ignition[667]: op(1): executing: "modprobe" "qemu_fw_cfg"
Nov 6 23:05:10.949366 ignition[667]: op(1): [finished] loading QEMU firmware config module
Nov 6 23:05:10.949386 ignition[667]: QEMU firmware config was not found. Ignoring...
Nov 6 23:05:10.966921 systemd-networkd[763]: lo: Link UP
Nov 6 23:05:10.966930 systemd-networkd[763]: lo: Gained carrier
Nov 6 23:05:10.967724 systemd-networkd[763]: Enumeration completed
Nov 6 23:05:10.967811 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 6 23:05:10.968153 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 23:05:10.968158 systemd-networkd[763]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 6 23:05:10.968992 systemd-networkd[763]: eth0: Link UP
Nov 6 23:05:10.968995 systemd-networkd[763]: eth0: Gained carrier
Nov 6 23:05:10.969002 systemd-networkd[763]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 6 23:05:10.970108 systemd[1]: Reached target network.target - Network.
Nov 6 23:05:10.989827 systemd-networkd[763]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1
Nov 6 23:05:11.006661 ignition[667]: parsing config with SHA512: 346e9f49ceaee819c32822949b881d0ec4abe17a02d824736212fd2d98cb9f622dc4a365a40483c9be61709872a3a16296ebf9f87c8aad4c0240bf7de435c154
Nov 6 23:05:11.012137 unknown[667]: fetched base config from "system"
Nov 6 23:05:11.012147 unknown[667]: fetched user config from "qemu"
Nov 6 23:05:11.014070 ignition[667]: fetch-offline: fetch-offline passed
Nov 6 23:05:11.014188 ignition[667]: Ignition finished successfully
Nov 6 23:05:11.019501 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 23:05:11.021188 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Nov 6 23:05:11.032974 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 6 23:05:11.045670 ignition[771]: Ignition 2.20.0
Nov 6 23:05:11.045680 ignition[771]: Stage: kargs
Nov 6 23:05:11.045860 ignition[771]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:05:11.045871 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:05:11.049445 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 6 23:05:11.046785 ignition[771]: kargs: kargs passed
Nov 6 23:05:11.046833 ignition[771]: Ignition finished successfully
Nov 6 23:05:11.067953 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 6 23:05:11.078323 ignition[780]: Ignition 2.20.0
Nov 6 23:05:11.078334 ignition[780]: Stage: disks
Nov 6 23:05:11.078505 ignition[780]: no configs at "/usr/lib/ignition/base.d"
Nov 6 23:05:11.078515 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:05:11.079386 ignition[780]: disks: disks passed
Nov 6 23:05:11.081763 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 6 23:05:11.079429 ignition[780]: Ignition finished successfully
Nov 6 23:05:11.084036 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 6 23:05:11.085636 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 6 23:05:11.087885 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 6 23:05:11.089669 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 6 23:05:11.091797 systemd[1]: Reached target basic.target - Basic System.
Nov 6 23:05:11.109937 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 6 23:05:11.118514 systemd-resolved[280]: Detected conflict on linux IN A 10.0.0.7
Nov 6 23:05:11.118529 systemd-resolved[280]: Hostname conflict, changing published hostname from 'linux' to 'linux7'.
Nov 6 23:05:11.121610 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Nov 6 23:05:11.124085 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 6 23:05:11.141902 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 6 23:05:11.182803 kernel: EXT4-fs (vda9): mounted filesystem 93ef6c07-4a07-4e6a-86ce-df7a94c95ac7 r/w with ordered data mode. Quota mode: none.
Nov 6 23:05:11.182944 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 6 23:05:11.184321 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 6 23:05:11.197876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 23:05:11.199941 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 6 23:05:11.201442 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Nov 6 23:05:11.201497 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 6 23:05:11.214574 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (799)
Nov 6 23:05:11.214598 kernel: BTRFS info (device vda6): first mount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:05:11.214608 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:05:11.214625 kernel: BTRFS info (device vda6): using free space tree
Nov 6 23:05:11.214634 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 6 23:05:11.201522 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 23:05:11.206372 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 6 23:05:11.208958 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 6 23:05:11.216621 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 23:05:11.248402 initrd-setup-root[824]: cut: /sysroot/etc/passwd: No such file or directory
Nov 6 23:05:11.253058 initrd-setup-root[831]: cut: /sysroot/etc/group: No such file or directory
Nov 6 23:05:11.256252 initrd-setup-root[838]: cut: /sysroot/etc/shadow: No such file or directory
Nov 6 23:05:11.259543 initrd-setup-root[845]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 6 23:05:11.330786 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 6 23:05:11.340882 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 6 23:05:11.343230 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 6 23:05:11.348779 kernel: BTRFS info (device vda6): last unmount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:05:11.362552 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 6 23:05:11.366683 ignition[914]: INFO : Ignition 2.20.0
Nov 6 23:05:11.366683 ignition[914]: INFO : Stage: mount
Nov 6 23:05:11.368917 ignition[914]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:05:11.368917 ignition[914]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:05:11.368917 ignition[914]: INFO : mount: mount passed
Nov 6 23:05:11.368917 ignition[914]: INFO : Ignition finished successfully
Nov 6 23:05:11.370710 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 6 23:05:11.382902 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 6 23:05:11.942834 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 6 23:05:11.951941 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 6 23:05:11.959383 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (926)
Nov 6 23:05:11.959422 kernel: BTRFS info (device vda6): first mount of filesystem 9553d21b-1d44-4f16-bc6d-739b0555444a
Nov 6 23:05:11.960489 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Nov 6 23:05:11.960505 kernel: BTRFS info (device vda6): using free space tree
Nov 6 23:05:11.963792 kernel: BTRFS info (device vda6): auto enabling async discard
Nov 6 23:05:11.964603 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 6 23:05:11.988326 ignition[943]: INFO : Ignition 2.20.0
Nov 6 23:05:11.988326 ignition[943]: INFO : Stage: files
Nov 6 23:05:11.990182 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:05:11.990182 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:05:11.990182 ignition[943]: DEBUG : files: compiled without relabeling support, skipping
Nov 6 23:05:11.993939 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 6 23:05:11.993939 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 6 23:05:11.993939 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 6 23:05:11.993939 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 6 23:05:11.993939 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 6 23:05:11.993322 unknown[943]: wrote ssh authorized keys file for user: core
Nov 6 23:05:12.002205 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 6 23:05:12.002205 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Nov 6 23:05:12.049914 systemd-networkd[763]: eth0: Gained IPv6LL
Nov 6 23:05:12.132384 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 6 23:05:12.567647 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Nov 6 23:05:12.569753 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 23:05:12.569753 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Nov 6 23:05:12.776031 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Nov 6 23:05:12.867686 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Nov 6 23:05:12.867686 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 6 23:05:12.871728 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Nov 6 23:05:13.392050 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Nov 6 23:05:14.101550 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Nov 6 23:05:14.104070 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Nov 6 23:05:14.120168 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 23:05:14.122335 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Nov 6 23:05:14.122335 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Nov 6 23:05:14.122335 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Nov 6 23:05:14.122335 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Nov 6 23:05:14.122335 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:05:14.122335 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 6 23:05:14.122335 ignition[943]: INFO : files: files passed
Nov 6 23:05:14.122335 ignition[943]: INFO : Ignition finished successfully
Nov 6 23:05:14.125383 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 6 23:05:14.139984 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 6 23:05:14.142026 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 6 23:05:14.144435 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 6 23:05:14.144545 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 6 23:05:14.150385 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory
Nov 6 23:05:14.152293 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:05:14.152293 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:05:14.155562 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 6 23:05:14.158799 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:05:14.160209 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 6 23:05:14.168905 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 6 23:05:14.187846 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 6 23:05:14.187952 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 6 23:05:14.190361 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 6 23:05:14.192452 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 6 23:05:14.194432 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 6 23:05:14.195218 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 6 23:05:14.210387 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:05:14.227932 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 6 23:05:14.236665 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 6 23:05:14.238094 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 6 23:05:14.240297 systemd[1]: Stopped target timers.target - Timer Units.
Nov 6 23:05:14.242253 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 6 23:05:14.242377 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 6 23:05:14.245008 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 6 23:05:14.247161 systemd[1]: Stopped target basic.target - Basic System.
Nov 6 23:05:14.248905 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 6 23:05:14.250881 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 6 23:05:14.253021 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 6 23:05:14.255093 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 6 23:05:14.257055 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 6 23:05:14.259152 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 6 23:05:14.261323 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 6 23:05:14.263214 systemd[1]: Stopped target swap.target - Swaps.
Nov 6 23:05:14.264889 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 6 23:05:14.265030 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 6 23:05:14.267637 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 6 23:05:14.268944 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 6 23:05:14.271088 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 6 23:05:14.271837 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 6 23:05:14.273295 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 6 23:05:14.273418 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 6 23:05:14.276310 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 6 23:05:14.276437 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 6 23:05:14.279117 systemd[1]: Stopped target paths.target - Path Units.
Nov 6 23:05:14.280812 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 6 23:05:14.285820 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 6 23:05:14.287373 systemd[1]: Stopped target slices.target - Slice Units.
Nov 6 23:05:14.289762 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 6 23:05:14.291577 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 6 23:05:14.291669 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 6 23:05:14.293452 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 6 23:05:14.293538 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 6 23:05:14.295244 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 6 23:05:14.295359 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 6 23:05:14.297316 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 6 23:05:14.297418 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 6 23:05:14.310985 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 6 23:05:14.312067 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 6 23:05:14.312224 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 6 23:05:14.318011 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 6 23:05:14.318950 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 6 23:05:14.319092 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 6 23:05:14.327279 ignition[997]: INFO : Ignition 2.20.0
Nov 6 23:05:14.327279 ignition[997]: INFO : Stage: umount
Nov 6 23:05:14.327279 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 6 23:05:14.327279 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Nov 6 23:05:14.327279 ignition[997]: INFO : umount: umount passed
Nov 6 23:05:14.327279 ignition[997]: INFO : Ignition finished successfully
Nov 6 23:05:14.322025 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 6 23:05:14.322138 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 6 23:05:14.325924 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 6 23:05:14.327857 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 6 23:05:14.331525 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 6 23:05:14.333036 systemd[1]: Stopped target network.target - Network.
Nov 6 23:05:14.334413 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 6 23:05:14.334506 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 6 23:05:14.336490 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 6 23:05:14.336541 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 6 23:05:14.338535 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 6 23:05:14.338582 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 6 23:05:14.340715 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 6 23:05:14.340759 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 6 23:05:14.343151 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 6 23:05:14.345092 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 6 23:05:14.349163 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 6 23:05:14.349254 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 6 23:05:14.351468 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 6 23:05:14.351559 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 6 23:05:14.356404 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Nov 6 23:05:14.356692 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 6 23:05:14.356796 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 6 23:05:14.360798 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Nov 6 23:05:14.363060 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 6 23:05:14.363094 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 6 23:05:14.372861 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 6 23:05:14.374027 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 6 23:05:14.374093 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 6 23:05:14.376504 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 6 23:05:14.376552 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 6 23:05:14.380043 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 6 23:05:14.380096 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 6 23:05:14.382357 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 6 23:05:14.382405 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:05:14.385752 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:05:14.389036 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Nov 6 23:05:14.389093 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:05:14.400218 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 6 23:05:14.400320 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 6 23:05:14.403136 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 6 23:05:14.403224 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 6 23:05:14.405136 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 6 23:05:14.405180 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 6 23:05:14.408401 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 6 23:05:14.408548 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:05:14.410390 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 6 23:05:14.410430 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 6 23:05:14.412199 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 6 23:05:14.412233 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:05:14.414298 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 6 23:05:14.414354 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 6 23:05:14.417350 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 6 23:05:14.417400 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 6 23:05:14.419471 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Nov 6 23:05:14.419521 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 6 23:05:14.433944 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 6 23:05:14.435089 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 6 23:05:14.435159 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:05:14.438672 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 6 23:05:14.438722 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:05:14.442740 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Nov 6 23:05:14.442811 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Nov 6 23:05:14.443081 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 6 23:05:14.443176 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 6 23:05:14.444857 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 6 23:05:14.447648 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 6 23:05:14.457339 systemd[1]: Switching root. Nov 6 23:05:14.481884 systemd-journald[240]: Journal stopped Nov 6 23:05:15.255830 systemd-journald[240]: Received SIGTERM from PID 1 (systemd). 
Nov 6 23:05:15.255890 kernel: SELinux: policy capability network_peer_controls=1 Nov 6 23:05:15.255903 kernel: SELinux: policy capability open_perms=1 Nov 6 23:05:15.255913 kernel: SELinux: policy capability extended_socket_class=1 Nov 6 23:05:15.255922 kernel: SELinux: policy capability always_check_network=0 Nov 6 23:05:15.255932 kernel: SELinux: policy capability cgroup_seclabel=1 Nov 6 23:05:15.255941 kernel: SELinux: policy capability nnp_nosuid_transition=1 Nov 6 23:05:15.255954 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Nov 6 23:05:15.255963 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Nov 6 23:05:15.255977 kernel: audit: type=1403 audit(1762470314.642:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Nov 6 23:05:15.255988 systemd[1]: Successfully loaded SELinux policy in 32.580ms. Nov 6 23:05:15.256008 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.609ms. Nov 6 23:05:15.256019 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Nov 6 23:05:15.256030 systemd[1]: Detected virtualization kvm. Nov 6 23:05:15.256040 systemd[1]: Detected architecture arm64. Nov 6 23:05:15.256066 systemd[1]: Detected first boot. Nov 6 23:05:15.256077 systemd[1]: Initializing machine ID from VM UUID. Nov 6 23:05:15.256087 zram_generator::config[1044]: No configuration found. Nov 6 23:05:15.256098 kernel: NET: Registered PF_VSOCK protocol family Nov 6 23:05:15.256108 systemd[1]: Populated /etc with preset unit settings. Nov 6 23:05:15.256119 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Nov 6 23:05:15.256129 systemd[1]: initrd-switch-root.service: Deactivated successfully. 
Nov 6 23:05:15.256140 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Nov 6 23:05:15.256150 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Nov 6 23:05:15.256161 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Nov 6 23:05:15.256172 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Nov 6 23:05:15.256182 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Nov 6 23:05:15.256192 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Nov 6 23:05:15.256202 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Nov 6 23:05:15.256212 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Nov 6 23:05:15.256223 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Nov 6 23:05:15.256232 systemd[1]: Created slice user.slice - User and Session Slice. Nov 6 23:05:15.256244 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 6 23:05:15.256254 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 6 23:05:15.256264 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Nov 6 23:05:15.256274 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Nov 6 23:05:15.256284 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Nov 6 23:05:15.256294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 6 23:05:15.256304 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Nov 6 23:05:15.256314 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). 
Nov 6 23:05:15.256324 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Nov 6 23:05:15.256336 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Nov 6 23:05:15.256346 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Nov 6 23:05:15.256357 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Nov 6 23:05:15.256367 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 6 23:05:15.256377 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 6 23:05:15.256386 systemd[1]: Reached target slices.target - Slice Units. Nov 6 23:05:15.256396 systemd[1]: Reached target swap.target - Swaps. Nov 6 23:05:15.256406 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Nov 6 23:05:15.256417 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Nov 6 23:05:15.256428 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Nov 6 23:05:15.256446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 6 23:05:15.256459 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 6 23:05:15.256470 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 6 23:05:15.256480 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Nov 6 23:05:15.256490 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Nov 6 23:05:15.256501 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Nov 6 23:05:15.256510 systemd[1]: Mounting media.mount - External Media Directory... Nov 6 23:05:15.256522 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Nov 6 23:05:15.256533 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Nov 6 23:05:15.256542 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Nov 6 23:05:15.256553 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Nov 6 23:05:15.256563 systemd[1]: Reached target machines.target - Containers. Nov 6 23:05:15.256573 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Nov 6 23:05:15.256584 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:05:15.256594 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 6 23:05:15.256604 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Nov 6 23:05:15.256615 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:05:15.256625 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:05:15.256635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:05:15.256645 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Nov 6 23:05:15.256655 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:05:15.256665 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Nov 6 23:05:15.256676 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Nov 6 23:05:15.256686 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Nov 6 23:05:15.256697 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Nov 6 23:05:15.256707 systemd[1]: Stopped systemd-fsck-usr.service. 
Nov 6 23:05:15.256717 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:05:15.256727 kernel: fuse: init (API version 7.39) Nov 6 23:05:15.256736 kernel: loop: module loaded Nov 6 23:05:15.256746 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 6 23:05:15.256755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 6 23:05:15.256773 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Nov 6 23:05:15.256791 kernel: ACPI: bus type drm_connector registered Nov 6 23:05:15.256803 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Nov 6 23:05:15.256836 systemd-journald[1116]: Collecting audit messages is disabled. Nov 6 23:05:15.256864 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Nov 6 23:05:15.256875 systemd-journald[1116]: Journal started Nov 6 23:05:15.256895 systemd-journald[1116]: Runtime Journal (/run/log/journal/0121580f92fe464f9e14201033a2060b) is 5.9M, max 47.3M, 41.4M free. Nov 6 23:05:15.026760 systemd[1]: Queued start job for default target multi-user.target. Nov 6 23:05:15.039697 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Nov 6 23:05:15.040074 systemd[1]: systemd-journald.service: Deactivated successfully. Nov 6 23:05:15.261794 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 6 23:05:15.264244 systemd[1]: verity-setup.service: Deactivated successfully. Nov 6 23:05:15.264277 systemd[1]: Stopped verity-setup.service. Nov 6 23:05:15.271759 systemd[1]: Started systemd-journald.service - Journal Service. Nov 6 23:05:15.272680 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Nov 6 23:05:15.274284 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Nov 6 23:05:15.275940 systemd[1]: Mounted media.mount - External Media Directory. Nov 6 23:05:15.277574 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Nov 6 23:05:15.279824 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Nov 6 23:05:15.281123 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Nov 6 23:05:15.284820 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Nov 6 23:05:15.286725 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 6 23:05:15.288689 systemd[1]: modprobe@configfs.service: Deactivated successfully. Nov 6 23:05:15.288874 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Nov 6 23:05:15.290511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:05:15.290675 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:05:15.292559 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 6 23:05:15.292740 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:05:15.294274 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:05:15.294466 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:05:15.296339 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 6 23:05:15.296499 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 6 23:05:15.298118 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:05:15.298287 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:05:15.300231 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 6 23:05:15.301842 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Nov 6 23:05:15.303572 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 6 23:05:15.305664 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Nov 6 23:05:15.316251 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 6 23:05:15.323089 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 6 23:05:15.330901 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 6 23:05:15.333522 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 6 23:05:15.334840 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 6 23:05:15.334883 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 6 23:05:15.337113 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Nov 6 23:05:15.339702 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 6 23:05:15.342279 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Nov 6 23:05:15.343556 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:05:15.345024 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 6 23:05:15.348057 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 6 23:05:15.349887 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:05:15.351277 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 6 23:05:15.353073 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 6 23:05:15.355986 systemd-journald[1116]: Time spent on flushing to /var/log/journal/0121580f92fe464f9e14201033a2060b is 19.297ms for 870 entries. Nov 6 23:05:15.355986 systemd-journald[1116]: System Journal (/var/log/journal/0121580f92fe464f9e14201033a2060b) is 8M, max 195.6M, 187.6M free. Nov 6 23:05:15.385372 systemd-journald[1116]: Received client request to flush runtime journal. Nov 6 23:05:15.385425 kernel: loop0: detected capacity change from 0 to 207008 Nov 6 23:05:15.358993 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:05:15.363258 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 6 23:05:15.369969 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 6 23:05:15.373545 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 6 23:05:15.378868 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 6 23:05:15.384449 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Nov 6 23:05:15.387539 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 6 23:05:15.391469 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 6 23:05:15.393916 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 6 23:05:15.396441 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 6 23:05:15.401801 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:05:15.411538 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 6 23:05:15.417900 kernel: loop1: detected capacity change from 0 to 123192 Nov 6 23:05:15.421108 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Nov 6 23:05:15.423496 udevadm[1165]: systemd-udev-settle.service is deprecated. 
Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Nov 6 23:05:15.432837 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 6 23:05:15.446929 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 6 23:05:15.447787 kernel: loop2: detected capacity change from 0 to 113512 Nov 6 23:05:15.449406 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 6 23:05:15.450172 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Nov 6 23:05:15.474505 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Nov 6 23:05:15.474523 systemd-tmpfiles[1181]: ACLs are not supported, ignoring. Nov 6 23:05:15.479248 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 6 23:05:15.493807 kernel: loop3: detected capacity change from 0 to 207008 Nov 6 23:05:15.501804 kernel: loop4: detected capacity change from 0 to 123192 Nov 6 23:05:15.507795 kernel: loop5: detected capacity change from 0 to 113512 Nov 6 23:05:15.512999 (sd-merge)[1186]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Nov 6 23:05:15.513502 (sd-merge)[1186]: Merged extensions into '/usr'. Nov 6 23:05:15.517242 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)... Nov 6 23:05:15.517257 systemd[1]: Reloading... Nov 6 23:05:15.569814 zram_generator::config[1213]: No configuration found. Nov 6 23:05:15.635358 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 6 23:05:15.674608 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:05:15.726024 systemd[1]: Reloading finished in 208 ms. 
Nov 6 23:05:15.745850 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Nov 6 23:05:15.747568 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 6 23:05:15.762243 systemd[1]: Starting ensure-sysext.service... Nov 6 23:05:15.764347 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 6 23:05:15.774351 systemd[1]: Reload requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)... Nov 6 23:05:15.774369 systemd[1]: Reloading... Nov 6 23:05:15.786565 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 6 23:05:15.786806 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Nov 6 23:05:15.787422 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 6 23:05:15.787647 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Nov 6 23:05:15.787702 systemd-tmpfiles[1250]: ACLs are not supported, ignoring. Nov 6 23:05:15.790275 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:05:15.790290 systemd-tmpfiles[1250]: Skipping /boot Nov 6 23:05:15.799860 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot. Nov 6 23:05:15.799877 systemd-tmpfiles[1250]: Skipping /boot Nov 6 23:05:15.827802 zram_generator::config[1286]: No configuration found. Nov 6 23:05:15.900517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:05:15.950651 systemd[1]: Reloading finished in 175 ms. Nov 6 23:05:15.964791 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Nov 6 23:05:15.984226 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 6 23:05:15.992950 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:05:15.995704 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 6 23:05:15.998590 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 6 23:05:16.004275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 6 23:05:16.009261 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 6 23:05:16.015517 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 6 23:05:16.019718 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:05:16.024124 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:05:16.027062 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:05:16.030745 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:05:16.031944 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:05:16.032074 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:05:16.034021 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 6 23:05:16.038202 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 6 23:05:16.040605 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Nov 6 23:05:16.040833 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:05:16.044305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:05:16.044511 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:05:16.046655 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:05:16.046870 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:05:16.056029 augenrules[1349]: No rules Nov 6 23:05:16.057359 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:05:16.057658 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:05:16.060696 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 6 23:05:16.073063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:05:16.078076 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:05:16.078793 systemd-udevd[1326]: Using default interface naming scheme 'v255'. Nov 6 23:05:16.083156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:05:16.084573 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:05:16.084705 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:05:16.086299 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 6 23:05:16.090313 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 6 23:05:16.093841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Nov 6 23:05:16.095830 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 6 23:05:16.097961 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:05:16.098126 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:05:16.100038 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:05:16.100212 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:05:16.102261 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:05:16.102447 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:05:16.104461 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 6 23:05:16.106199 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 6 23:05:16.120963 systemd[1]: Finished ensure-sysext.service. Nov 6 23:05:16.135504 systemd-resolved[1320]: Positive Trust Anchors: Nov 6 23:05:16.135536 systemd-resolved[1320]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 6 23:05:16.135567 systemd-resolved[1320]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 6 23:05:16.140206 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:05:16.141357 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Nov 6 23:05:16.142186 systemd-resolved[1320]: Defaulting to hostname 'linux'. Nov 6 23:05:16.143879 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 6 23:05:16.147974 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 6 23:05:16.151080 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 6 23:05:16.154481 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 6 23:05:16.157039 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 6 23:05:16.157105 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Nov 6 23:05:16.159224 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 6 23:05:16.162374 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Nov 6 23:05:16.163645 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 6 23:05:16.165092 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 6 23:05:16.169064 augenrules[1390]: /sbin/augenrules: No change Nov 6 23:05:16.175795 augenrules[1415]: No rules Nov 6 23:05:16.181244 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:05:16.181482 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:05:16.183278 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 6 23:05:16.183470 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 6 23:05:16.185084 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Nov 6 23:05:16.185243 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 6 23:05:16.186740 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 6 23:05:16.187336 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 6 23:05:16.189192 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 6 23:05:16.189366 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 6 23:05:16.197457 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Nov 6 23:05:16.198971 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 6 23:05:16.200374 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 6 23:05:16.200453 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 6 23:05:16.214894 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1387) Nov 6 23:05:16.250398 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Nov 6 23:05:16.259412 systemd-networkd[1401]: lo: Link UP Nov 6 23:05:16.259421 systemd-networkd[1401]: lo: Gained carrier Nov 6 23:05:16.260974 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 6 23:05:16.262079 systemd-networkd[1401]: Enumeration completed Nov 6 23:05:16.262954 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:05:16.263039 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 6 23:05:16.264094 systemd-networkd[1401]: eth0: Link UP Nov 6 23:05:16.264104 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Nov 6 23:05:16.264220 systemd-networkd[1401]: eth0: Gained carrier Nov 6 23:05:16.264282 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 6 23:05:16.265639 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 6 23:05:16.266949 systemd[1]: Reached target network.target - Network. Nov 6 23:05:16.268891 systemd[1]: Reached target time-set.target - System Time Set. Nov 6 23:05:16.273273 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Nov 6 23:05:16.276941 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.7/16, gateway 10.0.0.1 acquired from 10.0.0.1 Nov 6 23:05:16.278257 systemd-timesyncd[1404]: Network configuration changed, trying to establish connection. Nov 6 23:05:16.279871 systemd-timesyncd[1404]: Contacted time server 10.0.0.1:123 (10.0.0.1). Nov 6 23:05:16.279915 systemd-timesyncd[1404]: Initial clock synchronization to Thu 2025-11-06 23:05:16.187798 UTC. Nov 6 23:05:16.283967 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 6 23:05:16.286603 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 6 23:05:16.299681 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Nov 6 23:05:16.338101 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 6 23:05:16.343846 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 6 23:05:16.348406 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... 
Nov 6 23:05:16.361797 lvm[1443]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:05:16.374862 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 6 23:05:16.390362 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 6 23:05:16.392136 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 6 23:05:16.393424 systemd[1]: Reached target sysinit.target - System Initialization. Nov 6 23:05:16.394755 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 6 23:05:16.396121 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 6 23:05:16.397679 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 6 23:05:16.399036 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 6 23:05:16.400428 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 6 23:05:16.401908 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 6 23:05:16.401948 systemd[1]: Reached target paths.target - Path Units. Nov 6 23:05:16.403107 systemd[1]: Reached target timers.target - Timer Units. Nov 6 23:05:16.405012 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 6 23:05:16.407645 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 6 23:05:16.411538 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Nov 6 23:05:16.413256 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Nov 6 23:05:16.414713 systemd[1]: Reached target ssh-access.target - SSH Access Available. 
Nov 6 23:05:16.419893 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 6 23:05:16.421535 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Nov 6 23:05:16.424405 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 6 23:05:16.426474 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 6 23:05:16.427989 systemd[1]: Reached target sockets.target - Socket Units. Nov 6 23:05:16.429149 systemd[1]: Reached target basic.target - Basic System. Nov 6 23:05:16.430286 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:05:16.430328 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 6 23:05:16.431506 systemd[1]: Starting containerd.service - containerd container runtime... Nov 6 23:05:16.433427 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 6 23:05:16.435003 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 6 23:05:16.437905 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 6 23:05:16.442054 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 6 23:05:16.443364 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 6 23:05:16.446003 jq[1454]: false Nov 6 23:05:16.447070 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 6 23:05:16.453442 dbus-daemon[1453]: [system] SELinux support is enabled Nov 6 23:05:16.453835 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Nov 6 23:05:16.456941 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... 
Nov 6 23:05:16.459188 extend-filesystems[1455]: Found loop3 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found loop4 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found loop5 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda1 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda2 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda3 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found usr Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda4 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda6 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda7 Nov 6 23:05:16.460281 extend-filesystems[1455]: Found vda9 Nov 6 23:05:16.460281 extend-filesystems[1455]: Checking size of /dev/vda9 Nov 6 23:05:16.480913 extend-filesystems[1455]: Resized partition /dev/vda9 Nov 6 23:05:16.463022 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 6 23:05:16.484956 extend-filesystems[1473]: resize2fs 1.47.1 (20-May-2024) Nov 6 23:05:16.492620 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Nov 6 23:05:16.469066 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 6 23:05:16.471756 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 6 23:05:16.472392 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 6 23:05:16.475016 systemd[1]: Starting update-engine.service - Update Engine... Nov 6 23:05:16.477647 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 6 23:05:16.493641 jq[1476]: true Nov 6 23:05:16.484461 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 6 23:05:16.496717 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
Nov 6 23:05:16.500785 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1388) Nov 6 23:05:16.509342 update_engine[1474]: I20251106 23:05:16.509156 1474 main.cc:92] Flatcar Update Engine starting Nov 6 23:05:16.514160 update_engine[1474]: I20251106 23:05:16.511410 1474 update_check_scheduler.cc:74] Next update check in 8m46s Nov 6 23:05:16.511581 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 6 23:05:16.511972 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 6 23:05:16.512500 systemd[1]: motdgen.service: Deactivated successfully. Nov 6 23:05:16.512834 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 6 23:05:16.516289 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 6 23:05:16.516566 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 6 23:05:16.520295 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Nov 6 23:05:16.525906 (ntainerd)[1481]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 6 23:05:16.538563 extend-filesystems[1473]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Nov 6 23:05:16.538563 extend-filesystems[1473]: old_desc_blocks = 1, new_desc_blocks = 1 Nov 6 23:05:16.538563 extend-filesystems[1473]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Nov 6 23:05:16.550019 jq[1480]: true Nov 6 23:05:16.545113 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 6 23:05:16.550310 extend-filesystems[1455]: Resized filesystem in /dev/vda9 Nov 6 23:05:16.545347 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 6 23:05:16.557825 tar[1479]: linux-arm64/LICENSE Nov 6 23:05:16.558093 tar[1479]: linux-arm64/helm Nov 6 23:05:16.558694 systemd[1]: Started update-engine.service - Update Engine. 
Nov 6 23:05:16.560325 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 6 23:05:16.560366 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 6 23:05:16.562304 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 6 23:05:16.562344 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Nov 6 23:05:16.567982 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 6 23:05:16.570520 systemd-logind[1468]: Watching system buttons on /dev/input/event0 (Power Button) Nov 6 23:05:16.571002 systemd-logind[1468]: New seat seat0. Nov 6 23:05:16.571980 systemd[1]: Started systemd-logind.service - User Login Management. Nov 6 23:05:16.598246 bash[1509]: Updated "/home/core/.ssh/authorized_keys" Nov 6 23:05:16.601826 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 6 23:05:16.604197 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 6 23:05:16.638810 locksmithd[1502]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 6 23:05:16.698936 containerd[1481]: time="2025-11-06T23:05:16.698799000Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Nov 6 23:05:16.743092 containerd[1481]: time="2025-11-06T23:05:16.742994920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747277 containerd[1481]: time="2025-11-06T23:05:16.747209200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.113-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747277 containerd[1481]: time="2025-11-06T23:05:16.747252160Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 6 23:05:16.747277 containerd[1481]: time="2025-11-06T23:05:16.747269960Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 6 23:05:16.747473 containerd[1481]: time="2025-11-06T23:05:16.747439720Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 6 23:05:16.747473 containerd[1481]: time="2025-11-06T23:05:16.747469360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747621 containerd[1481]: time="2025-11-06T23:05:16.747535520Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747621 containerd[1481]: time="2025-11-06T23:05:16.747552480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747789 containerd[1481]: time="2025-11-06T23:05:16.747742960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747789 containerd[1481]: time="2025-11-06T23:05:16.747763160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747838 containerd[1481]: time="2025-11-06T23:05:16.747792120Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747838 containerd[1481]: time="2025-11-06T23:05:16.747802320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 6 23:05:16.747886 containerd[1481]: time="2025-11-06T23:05:16.747877360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:05:16.748108 containerd[1481]: time="2025-11-06T23:05:16.748071920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 6 23:05:16.748214 containerd[1481]: time="2025-11-06T23:05:16.748194920Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 6 23:05:16.748214 containerd[1481]: time="2025-11-06T23:05:16.748212520Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 6 23:05:16.748304 containerd[1481]: time="2025-11-06T23:05:16.748289000Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 6 23:05:16.748374 containerd[1481]: time="2025-11-06T23:05:16.748333920Z" level=info msg="metadata content store policy set" policy=shared Nov 6 23:05:16.751862 containerd[1481]: time="2025-11-06T23:05:16.751823200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 6 23:05:16.751912 containerd[1481]: time="2025-11-06T23:05:16.751881160Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Nov 6 23:05:16.751912 containerd[1481]: time="2025-11-06T23:05:16.751895680Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 6 23:05:16.751960 containerd[1481]: time="2025-11-06T23:05:16.751913160Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 6 23:05:16.751960 containerd[1481]: time="2025-11-06T23:05:16.751927240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 6 23:05:16.752068 containerd[1481]: time="2025-11-06T23:05:16.752050560Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 6 23:05:16.752293 containerd[1481]: time="2025-11-06T23:05:16.752272880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Nov 6 23:05:16.752391 containerd[1481]: time="2025-11-06T23:05:16.752374960Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 6 23:05:16.752413 containerd[1481]: time="2025-11-06T23:05:16.752395960Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 6 23:05:16.752447 containerd[1481]: time="2025-11-06T23:05:16.752411320Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 6 23:05:16.752447 containerd[1481]: time="2025-11-06T23:05:16.752424920Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 6 23:05:16.752483 containerd[1481]: time="2025-11-06T23:05:16.752446520Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Nov 6 23:05:16.752483 containerd[1481]: time="2025-11-06T23:05:16.752458800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 6 23:05:16.752483 containerd[1481]: time="2025-11-06T23:05:16.752471280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 6 23:05:16.752528 containerd[1481]: time="2025-11-06T23:05:16.752484600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 6 23:05:16.752528 containerd[1481]: time="2025-11-06T23:05:16.752497480Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 6 23:05:16.752528 containerd[1481]: time="2025-11-06T23:05:16.752509400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Nov 6 23:05:16.752528 containerd[1481]: time="2025-11-06T23:05:16.752520360Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 6 23:05:16.752600 containerd[1481]: time="2025-11-06T23:05:16.752539000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752600 containerd[1481]: time="2025-11-06T23:05:16.752551720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752600 containerd[1481]: time="2025-11-06T23:05:16.752563400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752600 containerd[1481]: time="2025-11-06T23:05:16.752574800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Nov 6 23:05:16.752600 containerd[1481]: time="2025-11-06T23:05:16.752586280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752683 containerd[1481]: time="2025-11-06T23:05:16.752606080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752683 containerd[1481]: time="2025-11-06T23:05:16.752618880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752683 containerd[1481]: time="2025-11-06T23:05:16.752630640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752683 containerd[1481]: time="2025-11-06T23:05:16.752642720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752683 containerd[1481]: time="2025-11-06T23:05:16.752655840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752683 containerd[1481]: time="2025-11-06T23:05:16.752668560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752683 containerd[1481]: time="2025-11-06T23:05:16.752679240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752817 containerd[1481]: time="2025-11-06T23:05:16.752690600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752817 containerd[1481]: time="2025-11-06T23:05:16.752705160Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 6 23:05:16.752817 containerd[1481]: time="2025-11-06T23:05:16.752724600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Nov 6 23:05:16.752817 containerd[1481]: time="2025-11-06T23:05:16.752748160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.752817 containerd[1481]: time="2025-11-06T23:05:16.752759480Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753553360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753597960Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753613760Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753630720Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753641240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753658800Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753673120Z" level=info msg="NRI interface is disabled by configuration." Nov 6 23:05:16.754549 containerd[1481]: time="2025-11-06T23:05:16.753683680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Nov 6 23:05:16.754730 containerd[1481]: time="2025-11-06T23:05:16.754061080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 6 23:05:16.754730 containerd[1481]: time="2025-11-06T23:05:16.754112960Z" level=info msg="Connect containerd service" Nov 6 23:05:16.754730 containerd[1481]: time="2025-11-06T23:05:16.754147000Z" level=info msg="using legacy CRI server" Nov 6 23:05:16.754730 containerd[1481]: time="2025-11-06T23:05:16.754158440Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 6 23:05:16.755973 containerd[1481]: time="2025-11-06T23:05:16.755532440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 6 23:05:16.757091 containerd[1481]: time="2025-11-06T23:05:16.757064680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:05:16.757605 containerd[1481]: time="2025-11-06T23:05:16.757576080Z" level=info msg="Start subscribing containerd event" Nov 6 23:05:16.757739 containerd[1481]: time="2025-11-06T23:05:16.757723960Z" level=info msg="Start recovering state" Nov 6 23:05:16.757864 containerd[1481]: time="2025-11-06T23:05:16.757849360Z" level=info msg="Start event monitor" Nov 6 23:05:16.758301 containerd[1481]: time="2025-11-06T23:05:16.758284000Z" level=info msg="Start snapshots 
syncer" Nov 6 23:05:16.758420 containerd[1481]: time="2025-11-06T23:05:16.758405440Z" level=info msg="Start cni network conf syncer for default" Nov 6 23:05:16.758597 containerd[1481]: time="2025-11-06T23:05:16.758467400Z" level=info msg="Start streaming server" Nov 6 23:05:16.758737 containerd[1481]: time="2025-11-06T23:05:16.758172400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 6 23:05:16.758927 containerd[1481]: time="2025-11-06T23:05:16.758910240Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 6 23:05:16.759165 containerd[1481]: time="2025-11-06T23:05:16.759150040Z" level=info msg="containerd successfully booted in 0.061801s" Nov 6 23:05:16.759232 systemd[1]: Started containerd.service - containerd container runtime. Nov 6 23:05:16.937863 tar[1479]: linux-arm64/README.md Nov 6 23:05:16.955634 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Nov 6 23:05:17.354017 sshd_keygen[1475]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 6 23:05:17.372202 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 6 23:05:17.381027 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 6 23:05:17.386095 systemd[1]: issuegen.service: Deactivated successfully. Nov 6 23:05:17.386302 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 6 23:05:17.388928 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Nov 6 23:05:17.399222 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 6 23:05:17.402079 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 6 23:05:17.404260 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Nov 6 23:05:17.405594 systemd[1]: Reached target getty.target - Login Prompts. 
Nov 6 23:05:17.873898 systemd-networkd[1401]: eth0: Gained IPv6LL Nov 6 23:05:17.876588 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 6 23:05:17.878383 systemd[1]: Reached target network-online.target - Network is Online. Nov 6 23:05:17.889043 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Nov 6 23:05:17.891360 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:17.893373 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 6 23:05:17.906272 systemd[1]: coreos-metadata.service: Deactivated successfully. Nov 6 23:05:17.906476 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Nov 6 23:05:17.908269 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 6 23:05:17.913022 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 6 23:05:18.425450 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:05:18.427073 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 6 23:05:18.428699 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:05:18.431894 systemd[1]: Startup finished in 531ms (kernel) + 5.965s (initrd) + 3.821s (userspace) = 10.318s. 
Nov 6 23:05:18.765509 kubelet[1568]: E1106 23:05:18.765382 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:05:18.767952 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:05:18.768085 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:05:18.769310 systemd[1]: kubelet.service: Consumed 739ms CPU time, 259.7M memory peak. Nov 6 23:05:21.554348 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 6 23:05:21.555637 systemd[1]: Started sshd@0-10.0.0.7:22-10.0.0.1:47906.service - OpenSSH per-connection server daemon (10.0.0.1:47906). Nov 6 23:05:21.606300 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 47906 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:05:21.607860 sshd-session[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:05:21.613528 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 6 23:05:21.619989 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 6 23:05:21.625088 systemd-logind[1468]: New session 1 of user core. Nov 6 23:05:21.629917 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 6 23:05:21.634068 systemd[1]: Starting user@500.service - User Manager for UID 500... Nov 6 23:05:21.638985 (systemd)[1586]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 6 23:05:21.641140 systemd-logind[1468]: New session c1 of user core. Nov 6 23:05:21.752379 systemd[1586]: Queued start job for default target default.target. 
Nov 6 23:05:21.761991 systemd[1586]: Created slice app.slice - User Application Slice. Nov 6 23:05:21.762022 systemd[1586]: Reached target paths.target - Paths. Nov 6 23:05:21.762060 systemd[1586]: Reached target timers.target - Timers. Nov 6 23:05:21.763316 systemd[1586]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 6 23:05:21.771650 systemd[1586]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 6 23:05:21.771716 systemd[1586]: Reached target sockets.target - Sockets. Nov 6 23:05:21.771752 systemd[1586]: Reached target basic.target - Basic System. Nov 6 23:05:21.771799 systemd[1586]: Reached target default.target - Main User Target. Nov 6 23:05:21.771823 systemd[1586]: Startup finished in 125ms. Nov 6 23:05:21.771986 systemd[1]: Started user@500.service - User Manager for UID 500. Nov 6 23:05:21.779946 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 6 23:05:21.840269 systemd[1]: Started sshd@1-10.0.0.7:22-10.0.0.1:47914.service - OpenSSH per-connection server daemon (10.0.0.1:47914). Nov 6 23:05:21.887264 sshd[1597]: Accepted publickey for core from 10.0.0.1 port 47914 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:05:21.888496 sshd-session[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:05:21.892629 systemd-logind[1468]: New session 2 of user core. Nov 6 23:05:21.901929 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 6 23:05:21.952813 sshd[1599]: Connection closed by 10.0.0.1 port 47914 Nov 6 23:05:21.952613 sshd-session[1597]: pam_unix(sshd:session): session closed for user core Nov 6 23:05:21.966235 systemd[1]: sshd@1-10.0.0.7:22-10.0.0.1:47914.service: Deactivated successfully. Nov 6 23:05:21.967587 systemd[1]: session-2.scope: Deactivated successfully. Nov 6 23:05:21.968246 systemd-logind[1468]: Session 2 logged out. Waiting for processes to exit. 
Nov 6 23:05:21.982787 systemd[1]: Started sshd@2-10.0.0.7:22-10.0.0.1:47928.service - OpenSSH per-connection server daemon (10.0.0.1:47928). Nov 6 23:05:21.984090 systemd-logind[1468]: Removed session 2. Nov 6 23:05:22.018757 sshd[1604]: Accepted publickey for core from 10.0.0.1 port 47928 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:05:22.019947 sshd-session[1604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:05:22.023830 systemd-logind[1468]: New session 3 of user core. Nov 6 23:05:22.033958 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 6 23:05:22.085666 sshd[1607]: Connection closed by 10.0.0.1 port 47928 Nov 6 23:05:22.086001 sshd-session[1604]: pam_unix(sshd:session): session closed for user core Nov 6 23:05:22.110812 systemd[1]: sshd@2-10.0.0.7:22-10.0.0.1:47928.service: Deactivated successfully. Nov 6 23:05:22.112411 systemd[1]: session-3.scope: Deactivated successfully. Nov 6 23:05:22.113623 systemd-logind[1468]: Session 3 logged out. Waiting for processes to exit. Nov 6 23:05:22.115201 systemd[1]: Started sshd@3-10.0.0.7:22-10.0.0.1:47940.service - OpenSSH per-connection server daemon (10.0.0.1:47940). Nov 6 23:05:22.116284 systemd-logind[1468]: Removed session 3. Nov 6 23:05:22.153025 sshd[1612]: Accepted publickey for core from 10.0.0.1 port 47940 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:05:22.154159 sshd-session[1612]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:05:22.158679 systemd-logind[1468]: New session 4 of user core. Nov 6 23:05:22.164898 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 6 23:05:22.215817 sshd[1615]: Connection closed by 10.0.0.1 port 47940 Nov 6 23:05:22.216256 sshd-session[1612]: pam_unix(sshd:session): session closed for user core Nov 6 23:05:22.228394 systemd[1]: sshd@3-10.0.0.7:22-10.0.0.1:47940.service: Deactivated successfully. 
Nov 6 23:05:22.229758 systemd[1]: session-4.scope: Deactivated successfully. Nov 6 23:05:22.232022 systemd-logind[1468]: Session 4 logged out. Waiting for processes to exit. Nov 6 23:05:22.232687 systemd[1]: Started sshd@4-10.0.0.7:22-10.0.0.1:47956.service - OpenSSH per-connection server daemon (10.0.0.1:47956). Nov 6 23:05:22.233401 systemd-logind[1468]: Removed session 4. Nov 6 23:05:22.270069 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 47956 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:05:22.271206 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:05:22.274953 systemd-logind[1468]: New session 5 of user core. Nov 6 23:05:22.281913 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 6 23:05:22.338737 sudo[1624]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 6 23:05:22.339042 sudo[1624]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:05:22.371512 sudo[1624]: pam_unix(sudo:session): session closed for user root Nov 6 23:05:22.372844 sshd[1623]: Connection closed by 10.0.0.1 port 47956 Nov 6 23:05:22.373381 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Nov 6 23:05:22.385154 systemd[1]: sshd@4-10.0.0.7:22-10.0.0.1:47956.service: Deactivated successfully. Nov 6 23:05:22.387976 systemd[1]: session-5.scope: Deactivated successfully. Nov 6 23:05:22.388608 systemd-logind[1468]: Session 5 logged out. Waiting for processes to exit. Nov 6 23:05:22.390403 systemd[1]: Started sshd@5-10.0.0.7:22-10.0.0.1:47964.service - OpenSSH per-connection server daemon (10.0.0.1:47964). Nov 6 23:05:22.391232 systemd-logind[1468]: Removed session 5. 
Nov 6 23:05:22.428683 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 47964 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:05:22.429946 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:05:22.433845 systemd-logind[1468]: New session 6 of user core. Nov 6 23:05:22.442917 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 6 23:05:22.492400 sudo[1634]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 6 23:05:22.492672 sudo[1634]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:05:22.495494 sudo[1634]: pam_unix(sudo:session): session closed for user root Nov 6 23:05:22.499786 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Nov 6 23:05:22.500049 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:05:22.523217 systemd[1]: Starting audit-rules.service - Load Audit Rules... Nov 6 23:05:22.544549 augenrules[1656]: No rules Nov 6 23:05:22.545646 systemd[1]: audit-rules.service: Deactivated successfully. Nov 6 23:05:22.546848 systemd[1]: Finished audit-rules.service - Load Audit Rules. Nov 6 23:05:22.547956 sudo[1633]: pam_unix(sudo:session): session closed for user root Nov 6 23:05:22.550081 sshd[1632]: Connection closed by 10.0.0.1 port 47964 Nov 6 23:05:22.549968 sshd-session[1629]: pam_unix(sshd:session): session closed for user core Nov 6 23:05:22.558751 systemd[1]: sshd@5-10.0.0.7:22-10.0.0.1:47964.service: Deactivated successfully. Nov 6 23:05:22.560126 systemd[1]: session-6.scope: Deactivated successfully. Nov 6 23:05:22.562003 systemd-logind[1468]: Session 6 logged out. Waiting for processes to exit. Nov 6 23:05:22.574041 systemd[1]: Started sshd@6-10.0.0.7:22-10.0.0.1:47978.service - OpenSSH per-connection server daemon (10.0.0.1:47978). 
Nov 6 23:05:22.575225 systemd-logind[1468]: Removed session 6. Nov 6 23:05:22.608434 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 47978 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:05:22.609489 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:05:22.613510 systemd-logind[1468]: New session 7 of user core. Nov 6 23:05:22.619923 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 6 23:05:22.670991 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 6 23:05:22.671604 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 6 23:05:22.950064 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 6 23:05:22.950145 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 6 23:05:23.150473 dockerd[1687]: time="2025-11-06T23:05:23.150409949Z" level=info msg="Starting up" Nov 6 23:05:23.385327 dockerd[1687]: time="2025-11-06T23:05:23.384934669Z" level=info msg="Loading containers: start." Nov 6 23:05:23.524796 kernel: Initializing XFRM netlink socket Nov 6 23:05:23.594414 systemd-networkd[1401]: docker0: Link UP Nov 6 23:05:23.632091 dockerd[1687]: time="2025-11-06T23:05:23.632024140Z" level=info msg="Loading containers: done." 
Nov 6 23:05:23.644103 dockerd[1687]: time="2025-11-06T23:05:23.643991432Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 6 23:05:23.644103 dockerd[1687]: time="2025-11-06T23:05:23.644081798Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Nov 6 23:05:23.644381 dockerd[1687]: time="2025-11-06T23:05:23.644255717Z" level=info msg="Daemon has completed initialization" Nov 6 23:05:23.671781 dockerd[1687]: time="2025-11-06T23:05:23.671719480Z" level=info msg="API listen on /run/docker.sock" Nov 6 23:05:23.671937 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 6 23:05:24.176150 containerd[1481]: time="2025-11-06T23:05:24.176095561Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\"" Nov 6 23:05:24.813460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439933333.mount: Deactivated successfully. 
Nov 6 23:05:25.861791 containerd[1481]: time="2025-11-06T23:05:25.861726020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:25.862840 containerd[1481]: time="2025-11-06T23:05:25.862816229Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.9: active requests=0, bytes read=26363687" Nov 6 23:05:25.863239 containerd[1481]: time="2025-11-06T23:05:25.863212347Z" level=info msg="ImageCreate event name:\"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:25.866134 containerd[1481]: time="2025-11-06T23:05:25.866097022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:25.867519 containerd[1481]: time="2025-11-06T23:05:25.867356887Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.9\" with image id \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:6df11cc2ad9679b1117be34d3a0230add88bc0a08fd7a3ebc26b680575e8de97\", size \"26360284\" in 1.691219s" Nov 6 23:05:25.867519 containerd[1481]: time="2025-11-06T23:05:25.867389734Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.9\" returns image reference \"sha256:02ea53851f07db91ed471dab1ab11541f5c294802371cd8f0cfd423cd5c71002\"" Nov 6 23:05:25.868354 containerd[1481]: time="2025-11-06T23:05:25.868324199Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\"" Nov 6 23:05:27.176606 containerd[1481]: time="2025-11-06T23:05:27.176547781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.9\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:27.177054 containerd[1481]: time="2025-11-06T23:05:27.177018120Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.9: active requests=0, bytes read=22531202" Nov 6 23:05:27.177859 containerd[1481]: time="2025-11-06T23:05:27.177824961Z" level=info msg="ImageCreate event name:\"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:27.181861 containerd[1481]: time="2025-11-06T23:05:27.181825190Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:27.183375 containerd[1481]: time="2025-11-06T23:05:27.183107474Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.9\" with image id \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:243c4b8e3bce271fcb1b78008ab996ab6976b1a20096deac08338fcd17979922\", size \"24099975\" in 1.314747833s" Nov 6 23:05:27.183375 containerd[1481]: time="2025-11-06T23:05:27.183139497Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.9\" returns image reference \"sha256:f0bcbad5082c944520b370596a2384affda710b9d7daf84e8a48352699af8e4b\"" Nov 6 23:05:27.183540 containerd[1481]: time="2025-11-06T23:05:27.183515960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\"" Nov 6 23:05:28.472651 containerd[1481]: time="2025-11-06T23:05:28.471489367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:28.472651 containerd[1481]: time="2025-11-06T23:05:28.472599741Z" level=info 
msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.9: active requests=0, bytes read=17484326" Nov 6 23:05:28.473032 containerd[1481]: time="2025-11-06T23:05:28.472886170Z" level=info msg="ImageCreate event name:\"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:28.476737 containerd[1481]: time="2025-11-06T23:05:28.476699127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:28.477890 containerd[1481]: time="2025-11-06T23:05:28.477858961Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.9\" with image id \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:50c49520dbd0e8b4076b6a5c77d8014df09ea3d59a73e8bafd2678d51ebb92d5\", size \"19053117\" in 1.294311176s" Nov 6 23:05:28.477890 containerd[1481]: time="2025-11-06T23:05:28.477889076Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.9\" returns image reference \"sha256:1d625baf81b59592006d97a6741bc947698ed222b612ac10efa57b7aa96d2a27\"" Nov 6 23:05:28.478491 containerd[1481]: time="2025-11-06T23:05:28.478322009Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\"" Nov 6 23:05:29.018405 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 6 23:05:29.035080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:29.168838 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 23:05:29.172571 (kubelet)[1961]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 6 23:05:29.210158 kubelet[1961]: E1106 23:05:29.210098 1961 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 6 23:05:29.213214 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 6 23:05:29.213354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 6 23:05:29.214901 systemd[1]: kubelet.service: Consumed 138ms CPU time, 109.7M memory peak. Nov 6 23:05:29.804778 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount626741646.mount: Deactivated successfully. Nov 6 23:05:30.023959 containerd[1481]: time="2025-11-06T23:05:30.023903994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:30.025121 containerd[1481]: time="2025-11-06T23:05:30.025073642Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.9: active requests=0, bytes read=27417819" Nov 6 23:05:30.026032 containerd[1481]: time="2025-11-06T23:05:30.025984854Z" level=info msg="ImageCreate event name:\"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:30.028248 containerd[1481]: time="2025-11-06T23:05:30.028199462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:30.028860 containerd[1481]: time="2025-11-06T23:05:30.028836675Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.32.9\" with image id \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\", repo tag \"registry.k8s.io/kube-proxy:v1.32.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:886af02535dc34886e4618b902f8c140d89af57233a245621d29642224516064\", size \"27416836\" in 1.550478204s" Nov 6 23:05:30.028896 containerd[1481]: time="2025-11-06T23:05:30.028866322Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.9\" returns image reference \"sha256:72b57ec14d31e8422925ef4c3eff44822cdc04a11fd30d13824f1897d83a16d4\"" Nov 6 23:05:30.029431 containerd[1481]: time="2025-11-06T23:05:30.029407175Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Nov 6 23:05:30.610694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2094363160.mount: Deactivated successfully. Nov 6 23:05:31.588844 containerd[1481]: time="2025-11-06T23:05:31.588530052Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:31.590078 containerd[1481]: time="2025-11-06T23:05:31.589781931Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Nov 6 23:05:31.590857 containerd[1481]: time="2025-11-06T23:05:31.590830844Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:31.594717 containerd[1481]: time="2025-11-06T23:05:31.594666175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:31.596614 containerd[1481]: time="2025-11-06T23:05:31.595866454Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.566424244s" Nov 6 23:05:31.596614 containerd[1481]: time="2025-11-06T23:05:31.595901093Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Nov 6 23:05:31.596614 containerd[1481]: time="2025-11-06T23:05:31.596318639Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Nov 6 23:05:32.054031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1104610874.mount: Deactivated successfully. Nov 6 23:05:32.056831 containerd[1481]: time="2025-11-06T23:05:32.056794650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:32.057796 containerd[1481]: time="2025-11-06T23:05:32.057730922Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Nov 6 23:05:32.058564 containerd[1481]: time="2025-11-06T23:05:32.058533088Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:32.060754 containerd[1481]: time="2025-11-06T23:05:32.060728287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:32.061973 containerd[1481]: time="2025-11-06T23:05:32.061836702Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest 
\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 465.404886ms" Nov 6 23:05:32.061973 containerd[1481]: time="2025-11-06T23:05:32.061876296Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Nov 6 23:05:32.062343 containerd[1481]: time="2025-11-06T23:05:32.062298892Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Nov 6 23:05:32.584992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931792093.mount: Deactivated successfully. Nov 6 23:05:34.457505 containerd[1481]: time="2025-11-06T23:05:34.457450350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:34.458231 containerd[1481]: time="2025-11-06T23:05:34.458160466Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943167" Nov 6 23:05:34.458905 containerd[1481]: time="2025-11-06T23:05:34.458876370Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:34.462071 containerd[1481]: time="2025-11-06T23:05:34.462042565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:34.463423 containerd[1481]: time="2025-11-06T23:05:34.463397881Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.401071327s" Nov 6 23:05:34.463470 
containerd[1481]: time="2025-11-06T23:05:34.463433891Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Nov 6 23:05:39.284005 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 6 23:05:39.293035 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:39.302725 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 6 23:05:39.302950 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 6 23:05:39.303195 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:05:39.314202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:39.335302 systemd[1]: Reload requested from client PID 2124 ('systemctl') (unit session-7.scope)... Nov 6 23:05:39.335317 systemd[1]: Reloading... Nov 6 23:05:39.415799 zram_generator::config[2174]: No configuration found. Nov 6 23:05:39.529629 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:05:39.602509 systemd[1]: Reloading finished in 266 ms. Nov 6 23:05:39.642320 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:05:39.644720 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:39.645867 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:05:39.646070 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:05:39.646108 systemd[1]: kubelet.service: Consumed 84ms CPU time, 95.1M memory peak. Nov 6 23:05:39.647510 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:39.749994 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 23:05:39.754058 (kubelet)[2215]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:05:39.785924 kubelet[2215]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:05:39.785924 kubelet[2215]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:05:39.785924 kubelet[2215]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:05:39.786275 kubelet[2215]: I1106 23:05:39.785994 2215 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:05:40.859495 kubelet[2215]: I1106 23:05:40.859454 2215 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 23:05:40.860040 kubelet[2215]: I1106 23:05:40.859928 2215 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:05:40.860730 kubelet[2215]: I1106 23:05:40.860606 2215 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 23:05:40.876886 kubelet[2215]: E1106 23:05:40.876843 2215 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.7:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:40.878894 kubelet[2215]: I1106 23:05:40.878865 2215 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:05:40.883381 kubelet[2215]: E1106 23:05:40.883355 2215 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:05:40.883381 kubelet[2215]: I1106 23:05:40.883382 2215 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:05:40.886352 kubelet[2215]: I1106 23:05:40.886092 2215 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 23:05:40.887346 kubelet[2215]: I1106 23:05:40.887304 2215 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:05:40.887586 kubelet[2215]: I1106 23:05:40.887425 2215 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:05:40.887809 kubelet[2215]: I1106 23:05:40.887794 2215 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:05:40.887873 kubelet[2215]: I1106 23:05:40.887865 2215 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 23:05:40.888108 kubelet[2215]: I1106 23:05:40.888094 2215 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:05:40.890729 kubelet[2215]: I1106 23:05:40.890707 2215 kubelet.go:446] "Attempting to 
sync node with API server" Nov 6 23:05:40.890852 kubelet[2215]: I1106 23:05:40.890839 2215 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:05:40.890967 kubelet[2215]: I1106 23:05:40.890955 2215 kubelet.go:352] "Adding apiserver pod source" Nov 6 23:05:40.891034 kubelet[2215]: I1106 23:05:40.891024 2215 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:05:40.895366 kubelet[2215]: W1106 23:05:40.894667 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:40.895366 kubelet[2215]: E1106 23:05:40.894727 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:40.895366 kubelet[2215]: I1106 23:05:40.894830 2215 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:05:40.895661 kubelet[2215]: W1106 23:05:40.895620 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:40.895701 kubelet[2215]: E1106 23:05:40.895665 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" 
logger="UnhandledError" Nov 6 23:05:40.895806 kubelet[2215]: I1106 23:05:40.895790 2215 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 23:05:40.896058 kubelet[2215]: W1106 23:05:40.896041 2215 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 6 23:05:40.898837 kubelet[2215]: I1106 23:05:40.898815 2215 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:05:40.898893 kubelet[2215]: I1106 23:05:40.898855 2215 server.go:1287] "Started kubelet" Nov 6 23:05:40.899340 kubelet[2215]: I1106 23:05:40.899242 2215 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:05:40.902202 kubelet[2215]: I1106 23:05:40.902164 2215 server.go:479] "Adding debug handlers to kubelet server" Nov 6 23:05:40.902716 kubelet[2215]: I1106 23:05:40.902695 2215 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:05:40.903116 kubelet[2215]: E1106 23:05:40.902797 2215 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18758d6fe46fbb10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 23:05:40.89883112 +0000 UTC m=+1.141917130,LastTimestamp:2025-11-06 23:05:40.89883112 +0000 UTC m=+1.141917130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 23:05:40.903208 kubelet[2215]: I1106 23:05:40.903189 2215 dynamic_serving_content.go:135] "Starting controller" 
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:05:40.905178 kubelet[2215]: I1106 23:05:40.905121 2215 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:05:40.905324 kubelet[2215]: E1106 23:05:40.905306 2215 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:40.905355 kubelet[2215]: I1106 23:05:40.905338 2215 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:05:40.905487 kubelet[2215]: I1106 23:05:40.905472 2215 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:05:40.905546 kubelet[2215]: I1106 23:05:40.905535 2215 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:05:40.905888 kubelet[2215]: W1106 23:05:40.905835 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:40.905950 kubelet[2215]: E1106 23:05:40.905893 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:40.905950 kubelet[2215]: E1106 23:05:40.905842 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="200ms" Nov 6 23:05:40.905950 kubelet[2215]: I1106 23:05:40.905910 2215 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 
23:05:40.906217 kubelet[2215]: I1106 23:05:40.906192 2215 factory.go:221] Registration of the systemd container factory successfully Nov 6 23:05:40.906428 kubelet[2215]: I1106 23:05:40.906268 2215 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:05:40.906526 kubelet[2215]: E1106 23:05:40.906487 2215 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:05:40.907215 kubelet[2215]: I1106 23:05:40.907194 2215 factory.go:221] Registration of the containerd container factory successfully Nov 6 23:05:40.917229 kubelet[2215]: I1106 23:05:40.917098 2215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 23:05:40.918142 kubelet[2215]: I1106 23:05:40.918107 2215 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 6 23:05:40.918142 kubelet[2215]: I1106 23:05:40.918132 2215 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 23:05:40.918237 kubelet[2215]: I1106 23:05:40.918150 2215 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 23:05:40.918237 kubelet[2215]: I1106 23:05:40.918158 2215 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 23:05:40.918237 kubelet[2215]: E1106 23:05:40.918191 2215 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:05:40.920569 kubelet[2215]: W1106 23:05:40.920541 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:40.920644 kubelet[2215]: E1106 23:05:40.920576 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:40.920644 kubelet[2215]: I1106 23:05:40.920553 2215 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:05:40.920644 kubelet[2215]: I1106 23:05:40.920595 2215 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:05:40.920644 kubelet[2215]: I1106 23:05:40.920609 2215 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:05:41.006285 kubelet[2215]: E1106 23:05:41.006249 2215 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:41.017145 kubelet[2215]: I1106 23:05:41.017112 2215 policy_none.go:49] "None policy: Start" Nov 6 23:05:41.017145 kubelet[2215]: I1106 23:05:41.017141 2215 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:05:41.017212 kubelet[2215]: I1106 23:05:41.017156 2215 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:05:41.018429 kubelet[2215]: E1106 23:05:41.018402 2215 kubelet.go:2406] 
"Skipping pod synchronization" err="container runtime status check may not have completed yet" Nov 6 23:05:41.023384 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 6 23:05:41.034695 kubelet[2215]: E1106 23:05:41.034452 2215 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.7:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.7:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18758d6fe46fbb10 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-11-06 23:05:40.89883112 +0000 UTC m=+1.141917130,LastTimestamp:2025-11-06 23:05:40.89883112 +0000 UTC m=+1.141917130,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Nov 6 23:05:41.034939 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 6 23:05:41.038304 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 6 23:05:41.053659 kubelet[2215]: I1106 23:05:41.053552 2215 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 23:05:41.053751 kubelet[2215]: I1106 23:05:41.053732 2215 eviction_manager.go:189] "Eviction manager: starting control loop" Nov 6 23:05:41.053798 kubelet[2215]: I1106 23:05:41.053751 2215 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:05:41.053994 kubelet[2215]: I1106 23:05:41.053979 2215 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:05:41.055277 kubelet[2215]: E1106 23:05:41.055251 2215 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:05:41.055342 kubelet[2215]: E1106 23:05:41.055296 2215 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Nov 6 23:05:41.106424 kubelet[2215]: E1106 23:05:41.106384 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="400ms" Nov 6 23:05:41.155369 kubelet[2215]: I1106 23:05:41.155252 2215 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:05:41.155811 kubelet[2215]: E1106 23:05:41.155729 2215 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Nov 6 23:05:41.226681 systemd[1]: Created slice kubepods-burstable-poda8b9ad4d83b93b8c21157880c3cca405.slice - libcontainer container kubepods-burstable-poda8b9ad4d83b93b8c21157880c3cca405.slice. 
Nov 6 23:05:41.236445 kubelet[2215]: E1106 23:05:41.236409 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:41.237890 systemd[1]: Created slice kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice - libcontainer container kubepods-burstable-pod4654b122dbb389158fe3c0766e603624.slice. Nov 6 23:05:41.248787 kubelet[2215]: E1106 23:05:41.248749 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:41.250985 systemd[1]: Created slice kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice - libcontainer container kubepods-burstable-poda1d51be1ff02022474f2598f6e43038f.slice. Nov 6 23:05:41.252317 kubelet[2215]: E1106 23:05:41.252288 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:41.308237 kubelet[2215]: I1106 23:05:41.308193 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b9ad4d83b93b8c21157880c3cca405-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8b9ad4d83b93b8c21157880c3cca405\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:41.308374 kubelet[2215]: I1106 23:05:41.308359 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:41.308446 kubelet[2215]: I1106 23:05:41.308432 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 6 23:05:41.308509 kubelet[2215]: I1106 23:05:41.308497 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8b9ad4d83b93b8c21157880c3cca405-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8b9ad4d83b93b8c21157880c3cca405\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:41.308716 kubelet[2215]: I1106 23:05:41.308562 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b9ad4d83b93b8c21157880c3cca405-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8b9ad4d83b93b8c21157880c3cca405\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:41.308716 kubelet[2215]: I1106 23:05:41.308581 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:41.308716 kubelet[2215]: I1106 23:05:41.308595 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:41.308716 kubelet[2215]: I1106 23:05:41.308610 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:41.308716 kubelet[2215]: I1106 23:05:41.308626 2215 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:41.356673 kubelet[2215]: I1106 23:05:41.356633 2215 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:05:41.356963 kubelet[2215]: E1106 23:05:41.356934 2215 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Nov 6 23:05:41.507162 kubelet[2215]: E1106 23:05:41.507064 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="800ms" Nov 6 23:05:41.537502 kubelet[2215]: E1106 23:05:41.537473 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:41.538134 containerd[1481]: time="2025-11-06T23:05:41.538082494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8b9ad4d83b93b8c21157880c3cca405,Namespace:kube-system,Attempt:0,}" Nov 6 23:05:41.549506 kubelet[2215]: E1106 23:05:41.549475 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:41.550032 containerd[1481]: time="2025-11-06T23:05:41.549813115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,}" Nov 6 23:05:41.553291 kubelet[2215]: E1106 23:05:41.553255 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:41.553603 containerd[1481]: time="2025-11-06T23:05:41.553578273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,}" Nov 6 23:05:41.758982 kubelet[2215]: I1106 23:05:41.758888 2215 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:05:41.759248 kubelet[2215]: E1106 23:05:41.759203 2215 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.7:6443/api/v1/nodes\": dial tcp 10.0.0.7:6443: connect: connection refused" node="localhost" Nov 6 23:05:42.114628 kubelet[2215]: W1106 23:05:42.114482 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:42.114628 kubelet[2215]: E1106 23:05:42.114554 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.7:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:42.122073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3212006538.mount: Deactivated successfully. 
Nov 6 23:05:42.127307 containerd[1481]: time="2025-11-06T23:05:42.127263482Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:05:42.128900 containerd[1481]: time="2025-11-06T23:05:42.128857655Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:05:42.129608 containerd[1481]: time="2025-11-06T23:05:42.129569839Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:05:42.130692 kubelet[2215]: W1106 23:05:42.130617 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:42.130692 kubelet[2215]: E1106 23:05:42.130669 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.7:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:42.131119 containerd[1481]: time="2025-11-06T23:05:42.131084384Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:05:42.131986 containerd[1481]: time="2025-11-06T23:05:42.131937486Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 
6 23:05:42.132456 containerd[1481]: time="2025-11-06T23:05:42.132429322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Nov 6 23:05:42.133187 kubelet[2215]: W1106 23:05:42.133113 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:42.133187 kubelet[2215]: E1106 23:05:42.133156 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.7:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:42.133591 containerd[1481]: time="2025-11-06T23:05:42.133250661Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 6 23:05:42.136367 containerd[1481]: time="2025-11-06T23:05:42.134554128Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 6 23:05:42.137809 containerd[1481]: time="2025-11-06T23:05:42.137780830Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 587.899717ms" Nov 6 23:05:42.138625 containerd[1481]: time="2025-11-06T23:05:42.138562575Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 584.931366ms" Nov 6 23:05:42.142564 containerd[1481]: time="2025-11-06T23:05:42.142432021Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 604.245613ms" Nov 6 23:05:42.256266 kubelet[2215]: W1106 23:05:42.256161 2215 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.7:6443: connect: connection refused Nov 6 23:05:42.256266 kubelet[2215]: E1106 23:05:42.256227 2215 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.7:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.7:6443: connect: connection refused" logger="UnhandledError" Nov 6 23:05:42.275166 containerd[1481]: time="2025-11-06T23:05:42.274910458Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:05:42.275166 containerd[1481]: time="2025-11-06T23:05:42.274987130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:05:42.275166 containerd[1481]: time="2025-11-06T23:05:42.275008785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:42.275166 containerd[1481]: time="2025-11-06T23:05:42.275112986Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:42.275348 containerd[1481]: time="2025-11-06T23:05:42.275295776Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:05:42.275398 containerd[1481]: time="2025-11-06T23:05:42.275366375Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:05:42.275398 containerd[1481]: time="2025-11-06T23:05:42.275382517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:42.275506 containerd[1481]: time="2025-11-06T23:05:42.275474252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:42.277725 containerd[1481]: time="2025-11-06T23:05:42.276326195Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:05:42.277725 containerd[1481]: time="2025-11-06T23:05:42.276378975Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:05:42.277725 containerd[1481]: time="2025-11-06T23:05:42.276394078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:42.277725 containerd[1481]: time="2025-11-06T23:05:42.276853431Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:42.293946 systemd[1]: Started cri-containerd-4ad8235448e936d1a33e7425ac9ef7f7d58a77f37d5e22b2e705ec988965374e.scope - libcontainer container 4ad8235448e936d1a33e7425ac9ef7f7d58a77f37d5e22b2e705ec988965374e. Nov 6 23:05:42.298881 systemd[1]: Started cri-containerd-4ee6b3eaf320d6e4a2a08ac801f73c357cca393c5af554f3e43c54e3f9c123fe.scope - libcontainer container 4ee6b3eaf320d6e4a2a08ac801f73c357cca393c5af554f3e43c54e3f9c123fe. Nov 6 23:05:42.300847 systemd[1]: Started cri-containerd-7b903336a7e619ec3e5398eed3e0a88fd8f58ecb36c2c0639ecb178162b9d502.scope - libcontainer container 7b903336a7e619ec3e5398eed3e0a88fd8f58ecb36c2c0639ecb178162b9d502. Nov 6 23:05:42.308126 kubelet[2215]: E1106 23:05:42.308058 2215 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.7:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.7:6443: connect: connection refused" interval="1.6s" Nov 6 23:05:42.324552 containerd[1481]: time="2025-11-06T23:05:42.324509304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4654b122dbb389158fe3c0766e603624,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ad8235448e936d1a33e7425ac9ef7f7d58a77f37d5e22b2e705ec988965374e\"" Nov 6 23:05:42.327081 kubelet[2215]: E1106 23:05:42.326602 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:42.330454 containerd[1481]: time="2025-11-06T23:05:42.330417134Z" level=info msg="CreateContainer within sandbox \"4ad8235448e936d1a33e7425ac9ef7f7d58a77f37d5e22b2e705ec988965374e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 6 23:05:42.336167 containerd[1481]: time="2025-11-06T23:05:42.336135462Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a8b9ad4d83b93b8c21157880c3cca405,Namespace:kube-system,Attempt:0,} returns sandbox id \"4ee6b3eaf320d6e4a2a08ac801f73c357cca393c5af554f3e43c54e3f9c123fe\"" Nov 6 23:05:42.337277 kubelet[2215]: E1106 23:05:42.337251 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:42.339205 containerd[1481]: time="2025-11-06T23:05:42.339172821Z" level=info msg="CreateContainer within sandbox \"4ee6b3eaf320d6e4a2a08ac801f73c357cca393c5af554f3e43c54e3f9c123fe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 6 23:05:42.342139 containerd[1481]: time="2025-11-06T23:05:42.342107419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:a1d51be1ff02022474f2598f6e43038f,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b903336a7e619ec3e5398eed3e0a88fd8f58ecb36c2c0639ecb178162b9d502\"" Nov 6 23:05:42.343573 kubelet[2215]: E1106 23:05:42.342916 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:42.345301 containerd[1481]: time="2025-11-06T23:05:42.345267478Z" level=info msg="CreateContainer within sandbox \"7b903336a7e619ec3e5398eed3e0a88fd8f58ecb36c2c0639ecb178162b9d502\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 6 23:05:42.349887 containerd[1481]: time="2025-11-06T23:05:42.349851505Z" level=info msg="CreateContainer within sandbox \"4ad8235448e936d1a33e7425ac9ef7f7d58a77f37d5e22b2e705ec988965374e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"51e04913e61c3818711145c312925627568511ccb84de65ea7c9f5434d8fd8ef\"" Nov 6 23:05:42.350565 containerd[1481]: time="2025-11-06T23:05:42.350463484Z" level=info msg="StartContainer for 
\"51e04913e61c3818711145c312925627568511ccb84de65ea7c9f5434d8fd8ef\"" Nov 6 23:05:42.357934 containerd[1481]: time="2025-11-06T23:05:42.357896367Z" level=info msg="CreateContainer within sandbox \"4ee6b3eaf320d6e4a2a08ac801f73c357cca393c5af554f3e43c54e3f9c123fe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8ed18d21b1165ca4da469ebba426e1c560700a82da52693235d9ed3d0d5ab872\"" Nov 6 23:05:42.358411 containerd[1481]: time="2025-11-06T23:05:42.358384807Z" level=info msg="StartContainer for \"8ed18d21b1165ca4da469ebba426e1c560700a82da52693235d9ed3d0d5ab872\"" Nov 6 23:05:42.364122 containerd[1481]: time="2025-11-06T23:05:42.364018871Z" level=info msg="CreateContainer within sandbox \"7b903336a7e619ec3e5398eed3e0a88fd8f58ecb36c2c0639ecb178162b9d502\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fded7b0c5796afe65d09099774ef0229070363e1a028b53e239dd3866a85b662\"" Nov 6 23:05:42.364623 containerd[1481]: time="2025-11-06T23:05:42.364599645Z" level=info msg="StartContainer for \"fded7b0c5796afe65d09099774ef0229070363e1a028b53e239dd3866a85b662\"" Nov 6 23:05:42.373053 systemd[1]: Started cri-containerd-51e04913e61c3818711145c312925627568511ccb84de65ea7c9f5434d8fd8ef.scope - libcontainer container 51e04913e61c3818711145c312925627568511ccb84de65ea7c9f5434d8fd8ef. Nov 6 23:05:42.383071 systemd[1]: Started cri-containerd-8ed18d21b1165ca4da469ebba426e1c560700a82da52693235d9ed3d0d5ab872.scope - libcontainer container 8ed18d21b1165ca4da469ebba426e1c560700a82da52693235d9ed3d0d5ab872. Nov 6 23:05:42.396954 systemd[1]: Started cri-containerd-fded7b0c5796afe65d09099774ef0229070363e1a028b53e239dd3866a85b662.scope - libcontainer container fded7b0c5796afe65d09099774ef0229070363e1a028b53e239dd3866a85b662. 
Nov 6 23:05:42.423462 containerd[1481]: time="2025-11-06T23:05:42.423352083Z" level=info msg="StartContainer for \"51e04913e61c3818711145c312925627568511ccb84de65ea7c9f5434d8fd8ef\" returns successfully" Nov 6 23:05:42.437663 containerd[1481]: time="2025-11-06T23:05:42.437606709Z" level=info msg="StartContainer for \"8ed18d21b1165ca4da469ebba426e1c560700a82da52693235d9ed3d0d5ab872\" returns successfully" Nov 6 23:05:42.437762 containerd[1481]: time="2025-11-06T23:05:42.437621931Z" level=info msg="StartContainer for \"fded7b0c5796afe65d09099774ef0229070363e1a028b53e239dd3866a85b662\" returns successfully" Nov 6 23:05:42.563826 kubelet[2215]: I1106 23:05:42.563783 2215 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:05:42.929884 kubelet[2215]: E1106 23:05:42.929588 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:42.929884 kubelet[2215]: E1106 23:05:42.929699 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:42.932736 kubelet[2215]: E1106 23:05:42.932436 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:42.932736 kubelet[2215]: E1106 23:05:42.932531 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:42.933712 kubelet[2215]: E1106 23:05:42.933694 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:42.934059 kubelet[2215]: E1106 23:05:42.934046 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:43.936294 kubelet[2215]: E1106 23:05:43.935899 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:43.937444 kubelet[2215]: E1106 23:05:43.937103 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:43.937444 kubelet[2215]: E1106 23:05:43.937196 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:43.937684 kubelet[2215]: E1106 23:05:43.937520 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:44.468069 kubelet[2215]: E1106 23:05:44.468031 2215 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Nov 6 23:05:44.647405 kubelet[2215]: I1106 23:05:44.647223 2215 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 23:05:44.647405 kubelet[2215]: E1106 23:05:44.647262 2215 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Nov 6 23:05:44.659933 kubelet[2215]: E1106 23:05:44.659895 2215 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:44.760440 kubelet[2215]: E1106 23:05:44.760269 2215 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:44.860940 kubelet[2215]: E1106 23:05:44.860876 2215 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:44.939786 kubelet[2215]: E1106 23:05:44.939740 2215 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Nov 6 23:05:44.940122 kubelet[2215]: E1106 23:05:44.939905 2215 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:44.961960 kubelet[2215]: E1106 23:05:44.961925 2215 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:45.063084 kubelet[2215]: E1106 23:05:45.062969 2215 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:45.107457 kubelet[2215]: I1106 23:05:45.107412 2215 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:45.113917 kubelet[2215]: E1106 23:05:45.113863 2215 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:45.113917 kubelet[2215]: I1106 23:05:45.113910 2215 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:45.115507 kubelet[2215]: E1106 23:05:45.115477 2215 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:45.115507 kubelet[2215]: I1106 23:05:45.115502 2215 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 23:05:45.116908 kubelet[2215]: E1106 23:05:45.116883 2215 kubelet.go:3196] "Failed 
creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Nov 6 23:05:45.894507 kubelet[2215]: I1106 23:05:45.894269 2215 apiserver.go:52] "Watching apiserver" Nov 6 23:05:45.905752 kubelet[2215]: I1106 23:05:45.905698 2215 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:05:46.691960 systemd[1]: Reload requested from client PID 2499 ('systemctl') (unit session-7.scope)... Nov 6 23:05:46.691974 systemd[1]: Reloading... Nov 6 23:05:46.760853 zram_generator::config[2543]: No configuration found. Nov 6 23:05:46.843110 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 6 23:05:46.928111 systemd[1]: Reloading finished in 235 ms. Nov 6 23:05:46.951468 kubelet[2215]: I1106 23:05:46.950936 2215 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:05:46.951242 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:46.968184 systemd[1]: kubelet.service: Deactivated successfully. Nov 6 23:05:46.968406 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 6 23:05:46.968456 systemd[1]: kubelet.service: Consumed 1.481s CPU time, 130.1M memory peak. Nov 6 23:05:46.978075 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 6 23:05:47.088279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 6 23:05:47.104145 (kubelet)[2585]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 6 23:05:47.148789 kubelet[2585]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:05:47.148789 kubelet[2585]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Nov 6 23:05:47.148789 kubelet[2585]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 6 23:05:47.149105 kubelet[2585]: I1106 23:05:47.148839 2585 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 6 23:05:47.154709 kubelet[2585]: I1106 23:05:47.154598 2585 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Nov 6 23:05:47.154709 kubelet[2585]: I1106 23:05:47.154626 2585 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 6 23:05:47.154871 kubelet[2585]: I1106 23:05:47.154856 2585 server.go:954] "Client rotation is on, will bootstrap in background" Nov 6 23:05:47.156933 kubelet[2585]: I1106 23:05:47.156555 2585 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Nov 6 23:05:47.160494 kubelet[2585]: I1106 23:05:47.160456 2585 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 6 23:05:47.163581 kubelet[2585]: E1106 23:05:47.163547 2585 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Nov 6 23:05:47.163581 kubelet[2585]: I1106 23:05:47.163581 2585 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Nov 6 23:05:47.166274 kubelet[2585]: I1106 23:05:47.166229 2585 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Nov 6 23:05:47.166474 kubelet[2585]: I1106 23:05:47.166446 2585 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 6 23:05:47.166625 kubelet[2585]: I1106 23:05:47.166473 2585 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Nov 6 23:05:47.166694 kubelet[2585]: I1106 23:05:47.166631 2585 topology_manager.go:138] "Creating topology manager with none policy" Nov 6 23:05:47.166694 kubelet[2585]: I1106 23:05:47.166640 2585 container_manager_linux.go:304] "Creating device plugin manager" Nov 6 23:05:47.166694 kubelet[2585]: I1106 23:05:47.166683 2585 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:05:47.166856 kubelet[2585]: I1106 23:05:47.166843 2585 kubelet.go:446] "Attempting to 
sync node with API server" Nov 6 23:05:47.166902 kubelet[2585]: I1106 23:05:47.166859 2585 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 6 23:05:47.166902 kubelet[2585]: I1106 23:05:47.166876 2585 kubelet.go:352] "Adding apiserver pod source" Nov 6 23:05:47.166902 kubelet[2585]: I1106 23:05:47.166885 2585 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 6 23:05:47.167973 kubelet[2585]: I1106 23:05:47.167350 2585 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Nov 6 23:05:47.167973 kubelet[2585]: I1106 23:05:47.167748 2585 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 6 23:05:47.170773 kubelet[2585]: I1106 23:05:47.168235 2585 watchdog_linux.go:99] "Systemd watchdog is not enabled" Nov 6 23:05:47.170773 kubelet[2585]: I1106 23:05:47.168275 2585 server.go:1287] "Started kubelet" Nov 6 23:05:47.170773 kubelet[2585]: I1106 23:05:47.169753 2585 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 6 23:05:47.170997 kubelet[2585]: I1106 23:05:47.170959 2585 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Nov 6 23:05:47.171419 kubelet[2585]: I1106 23:05:47.171399 2585 volume_manager.go:297] "Starting Kubelet Volume Manager" Nov 6 23:05:47.171746 kubelet[2585]: I1106 23:05:47.171566 2585 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Nov 6 23:05:47.173736 kubelet[2585]: I1106 23:05:47.171944 2585 reconciler.go:26] "Reconciler: start to sync state" Nov 6 23:05:47.173856 kubelet[2585]: I1106 23:05:47.172286 2585 server.go:479] "Adding debug handlers to kubelet server" Nov 6 23:05:47.176793 kubelet[2585]: I1106 23:05:47.172325 2585 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 6 23:05:47.176793 kubelet[2585]: I1106 23:05:47.175016 2585 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 6 23:05:47.176793 kubelet[2585]: I1106 23:05:47.173284 2585 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 6 23:05:47.176793 kubelet[2585]: E1106 23:05:47.173703 2585 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 6 23:05:47.176793 kubelet[2585]: I1106 23:05:47.172523 2585 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Nov 6 23:05:47.177012 kubelet[2585]: E1106 23:05:47.176991 2585 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Nov 6 23:05:47.184752 kubelet[2585]: I1106 23:05:47.182141 2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 6 23:05:47.184752 kubelet[2585]: I1106 23:05:47.183035 2585 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 6 23:05:47.184752 kubelet[2585]: I1106 23:05:47.183051 2585 status_manager.go:227] "Starting to sync pod status with apiserver" Nov 6 23:05:47.184752 kubelet[2585]: I1106 23:05:47.183066 2585 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Nov 6 23:05:47.184752 kubelet[2585]: I1106 23:05:47.183072 2585 kubelet.go:2382] "Starting kubelet main sync loop" Nov 6 23:05:47.184752 kubelet[2585]: E1106 23:05:47.183115 2585 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 6 23:05:47.193975 kubelet[2585]: I1106 23:05:47.193912 2585 factory.go:221] Registration of the containerd container factory successfully Nov 6 23:05:47.194053 kubelet[2585]: I1106 23:05:47.193983 2585 factory.go:221] Registration of the systemd container factory successfully Nov 6 23:05:47.228048 kubelet[2585]: I1106 23:05:47.227947 2585 cpu_manager.go:221] "Starting CPU manager" policy="none" Nov 6 23:05:47.228048 kubelet[2585]: I1106 23:05:47.227968 2585 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Nov 6 23:05:47.228048 kubelet[2585]: I1106 23:05:47.227990 2585 state_mem.go:36] "Initialized new in-memory state store" Nov 6 23:05:47.228197 kubelet[2585]: I1106 23:05:47.228159 2585 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 6 23:05:47.228197 kubelet[2585]: I1106 23:05:47.228173 2585 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 6 23:05:47.228197 kubelet[2585]: I1106 23:05:47.228190 2585 policy_none.go:49] "None policy: Start" Nov 6 23:05:47.228197 kubelet[2585]: I1106 23:05:47.228199 2585 memory_manager.go:186] "Starting memorymanager" policy="None" Nov 6 23:05:47.228273 kubelet[2585]: I1106 23:05:47.228208 2585 state_mem.go:35] "Initializing new in-memory state store" Nov 6 23:05:47.228322 kubelet[2585]: I1106 23:05:47.228305 2585 state_mem.go:75] "Updated machine memory state" Nov 6 23:05:47.235065 kubelet[2585]: I1106 23:05:47.235037 2585 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 6 23:05:47.235238 kubelet[2585]: I1106 23:05:47.235221 2585 eviction_manager.go:189] "Eviction manager: starting 
control loop" Nov 6 23:05:47.235267 kubelet[2585]: I1106 23:05:47.235239 2585 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 6 23:05:47.235442 kubelet[2585]: I1106 23:05:47.235426 2585 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 6 23:05:47.236278 kubelet[2585]: E1106 23:05:47.236179 2585 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Nov 6 23:05:47.284263 kubelet[2585]: I1106 23:05:47.284209 2585 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:47.284263 kubelet[2585]: I1106 23:05:47.284243 2585 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 23:05:47.284263 kubelet[2585]: I1106 23:05:47.284256 2585 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:47.339405 kubelet[2585]: I1106 23:05:47.339371 2585 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Nov 6 23:05:47.346378 kubelet[2585]: I1106 23:05:47.346348 2585 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Nov 6 23:05:47.346495 kubelet[2585]: I1106 23:05:47.346425 2585 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Nov 6 23:05:47.374899 kubelet[2585]: I1106 23:05:47.374871 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a8b9ad4d83b93b8c21157880c3cca405-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8b9ad4d83b93b8c21157880c3cca405\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:47.374990 kubelet[2585]: I1106 23:05:47.374905 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:47.374990 kubelet[2585]: I1106 23:05:47.374925 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:47.374990 kubelet[2585]: I1106 23:05:47.374957 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:47.374990 kubelet[2585]: I1106 23:05:47.374976 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a8b9ad4d83b93b8c21157880c3cca405-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a8b9ad4d83b93b8c21157880c3cca405\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:47.375099 kubelet[2585]: I1106 23:05:47.374999 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a8b9ad4d83b93b8c21157880c3cca405-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a8b9ad4d83b93b8c21157880c3cca405\") " pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:47.375099 kubelet[2585]: I1106 23:05:47.375016 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:47.375099 kubelet[2585]: I1106 23:05:47.375037 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4654b122dbb389158fe3c0766e603624-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4654b122dbb389158fe3c0766e603624\") " pod="kube-system/kube-controller-manager-localhost" Nov 6 23:05:47.375099 kubelet[2585]: I1106 23:05:47.375053 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a1d51be1ff02022474f2598f6e43038f-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"a1d51be1ff02022474f2598f6e43038f\") " pod="kube-system/kube-scheduler-localhost" Nov 6 23:05:47.589271 kubelet[2585]: E1106 23:05:47.589067 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:47.589271 kubelet[2585]: E1106 23:05:47.589067 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:47.589271 kubelet[2585]: E1106 23:05:47.589144 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:47.690899 sudo[2619]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Nov 6 23:05:47.691175 sudo[2619]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Nov 6 
23:05:48.129739 sudo[2619]: pam_unix(sudo:session): session closed for user root Nov 6 23:05:48.167394 kubelet[2585]: I1106 23:05:48.167350 2585 apiserver.go:52] "Watching apiserver" Nov 6 23:05:48.172773 kubelet[2585]: I1106 23:05:48.172734 2585 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Nov 6 23:05:48.210966 kubelet[2585]: I1106 23:05:48.209736 2585 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Nov 6 23:05:48.210966 kubelet[2585]: I1106 23:05:48.209741 2585 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:48.212891 kubelet[2585]: E1106 23:05:48.209917 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:48.214686 kubelet[2585]: E1106 23:05:48.214619 2585 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Nov 6 23:05:48.215064 kubelet[2585]: E1106 23:05:48.215032 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:48.215398 kubelet[2585]: E1106 23:05:48.215354 2585 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Nov 6 23:05:48.215645 kubelet[2585]: E1106 23:05:48.215474 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:48.243032 kubelet[2585]: I1106 23:05:48.242467 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" 
podStartSLOduration=1.242448835 podStartE2EDuration="1.242448835s" podCreationTimestamp="2025-11-06 23:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:05:48.234579395 +0000 UTC m=+1.126461571" watchObservedRunningTime="2025-11-06 23:05:48.242448835 +0000 UTC m=+1.134330971" Nov 6 23:05:48.255791 kubelet[2585]: I1106 23:05:48.253626 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.253610235 podStartE2EDuration="1.253610235s" podCreationTimestamp="2025-11-06 23:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:05:48.243682476 +0000 UTC m=+1.135564692" watchObservedRunningTime="2025-11-06 23:05:48.253610235 +0000 UTC m=+1.145492411" Nov 6 23:05:48.266510 kubelet[2585]: I1106 23:05:48.266197 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.266181379 podStartE2EDuration="1.266181379s" podCreationTimestamp="2025-11-06 23:05:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:05:48.255968241 +0000 UTC m=+1.147850417" watchObservedRunningTime="2025-11-06 23:05:48.266181379 +0000 UTC m=+1.158063515" Nov 6 23:05:49.211388 kubelet[2585]: E1106 23:05:49.211344 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:49.211724 kubelet[2585]: E1106 23:05:49.211432 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:49.734625 
sudo[1668]: pam_unix(sudo:session): session closed for user root Nov 6 23:05:49.736888 sshd[1667]: Connection closed by 10.0.0.1 port 47978 Nov 6 23:05:49.738385 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Nov 6 23:05:49.742110 systemd[1]: sshd@6-10.0.0.7:22-10.0.0.1:47978.service: Deactivated successfully. Nov 6 23:05:49.743942 systemd[1]: session-7.scope: Deactivated successfully. Nov 6 23:05:49.744182 systemd[1]: session-7.scope: Consumed 6.690s CPU time, 257.3M memory peak. Nov 6 23:05:49.745126 systemd-logind[1468]: Session 7 logged out. Waiting for processes to exit. Nov 6 23:05:49.746089 systemd-logind[1468]: Removed session 7. Nov 6 23:05:51.859570 kubelet[2585]: I1106 23:05:51.859533 2585 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 6 23:05:51.860275 kubelet[2585]: I1106 23:05:51.860047 2585 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 6 23:05:51.860303 containerd[1481]: time="2025-11-06T23:05:51.859875268Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 6 23:05:52.338286 kubelet[2585]: E1106 23:05:52.338217 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:52.356816 systemd[1]: Created slice kubepods-besteffort-pod6b747641_3918_42dd_94f7_2b4906a426e1.slice - libcontainer container kubepods-besteffort-pod6b747641_3918_42dd_94f7_2b4906a426e1.slice. Nov 6 23:05:52.382374 systemd[1]: Created slice kubepods-burstable-pode3647c86_cc77_468f_8b6a_a2c2b794bf85.slice - libcontainer container kubepods-burstable-pode3647c86_cc77_468f_8b6a_a2c2b794bf85.slice. 
Nov 6 23:05:52.409845 kubelet[2585]: I1106 23:05:52.409795 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-lib-modules\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.409845 kubelet[2585]: I1106 23:05:52.409842 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ftd9s\" (UniqueName: \"kubernetes.io/projected/6b747641-3918-42dd-94f7-2b4906a426e1-kube-api-access-ftd9s\") pod \"kube-proxy-mwm7w\" (UID: \"6b747641-3918-42dd-94f7-2b4906a426e1\") " pod="kube-system/kube-proxy-mwm7w" Nov 6 23:05:52.410003 kubelet[2585]: I1106 23:05:52.409864 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6b747641-3918-42dd-94f7-2b4906a426e1-lib-modules\") pod \"kube-proxy-mwm7w\" (UID: \"6b747641-3918-42dd-94f7-2b4906a426e1\") " pod="kube-system/kube-proxy-mwm7w" Nov 6 23:05:52.410003 kubelet[2585]: I1106 23:05:52.409887 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3647c86-cc77-468f-8b6a-a2c2b794bf85-clustermesh-secrets\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410003 kubelet[2585]: I1106 23:05:52.409928 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cni-path\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410003 kubelet[2585]: I1106 23:05:52.409950 2585 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-config-path\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410003 kubelet[2585]: I1106 23:05:52.409967 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6b747641-3918-42dd-94f7-2b4906a426e1-kube-proxy\") pod \"kube-proxy-mwm7w\" (UID: \"6b747641-3918-42dd-94f7-2b4906a426e1\") " pod="kube-system/kube-proxy-mwm7w" Nov 6 23:05:52.410003 kubelet[2585]: I1106 23:05:52.409981 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-etc-cni-netd\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410126 kubelet[2585]: I1106 23:05:52.409996 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-xtables-lock\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410126 kubelet[2585]: I1106 23:05:52.410013 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-run\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410126 kubelet[2585]: I1106 23:05:52.410031 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hubble-tls\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410126 kubelet[2585]: I1106 23:05:52.410049 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7d5f\" (UniqueName: \"kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-kube-api-access-p7d5f\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410126 kubelet[2585]: I1106 23:05:52.410064 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6b747641-3918-42dd-94f7-2b4906a426e1-xtables-lock\") pod \"kube-proxy-mwm7w\" (UID: \"6b747641-3918-42dd-94f7-2b4906a426e1\") " pod="kube-system/kube-proxy-mwm7w" Nov 6 23:05:52.410126 kubelet[2585]: I1106 23:05:52.410081 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hostproc\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410255 kubelet[2585]: I1106 23:05:52.410094 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-cgroup\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410255 kubelet[2585]: I1106 23:05:52.410108 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-net\") pod \"cilium-5vqsg\" (UID: 
\"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410255 kubelet[2585]: I1106 23:05:52.410124 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-kernel\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.410255 kubelet[2585]: I1106 23:05:52.410165 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-bpf-maps\") pod \"cilium-5vqsg\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " pod="kube-system/cilium-5vqsg" Nov 6 23:05:52.528736 kubelet[2585]: E1106 23:05:52.528702 2585 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 23:05:52.528736 kubelet[2585]: E1106 23:05:52.528736 2585 projected.go:194] Error preparing data for projected volume kube-api-access-p7d5f for pod kube-system/cilium-5vqsg: configmap "kube-root-ca.crt" not found Nov 6 23:05:52.528888 kubelet[2585]: E1106 23:05:52.528816 2585 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-kube-api-access-p7d5f podName:e3647c86-cc77-468f-8b6a-a2c2b794bf85 nodeName:}" failed. No retries permitted until 2025-11-06 23:05:53.028795045 +0000 UTC m=+5.920677220 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-p7d5f" (UniqueName: "kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-kube-api-access-p7d5f") pod "cilium-5vqsg" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85") : configmap "kube-root-ca.crt" not found Nov 6 23:05:52.529373 kubelet[2585]: E1106 23:05:52.529353 2585 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 6 23:05:52.529373 kubelet[2585]: E1106 23:05:52.529375 2585 projected.go:194] Error preparing data for projected volume kube-api-access-ftd9s for pod kube-system/kube-proxy-mwm7w: configmap "kube-root-ca.crt" not found Nov 6 23:05:52.529458 kubelet[2585]: E1106 23:05:52.529414 2585 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6b747641-3918-42dd-94f7-2b4906a426e1-kube-api-access-ftd9s podName:6b747641-3918-42dd-94f7-2b4906a426e1 nodeName:}" failed. No retries permitted until 2025-11-06 23:05:53.029400721 +0000 UTC m=+5.921282897 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-ftd9s" (UniqueName: "kubernetes.io/projected/6b747641-3918-42dd-94f7-2b4906a426e1-kube-api-access-ftd9s") pod "kube-proxy-mwm7w" (UID: "6b747641-3918-42dd-94f7-2b4906a426e1") : configmap "kube-root-ca.crt" not found Nov 6 23:05:52.798801 systemd[1]: Created slice kubepods-besteffort-pod2411e4ec_6a37_41db_b6af_b79746a6273c.slice - libcontainer container kubepods-besteffort-pod2411e4ec_6a37_41db_b6af_b79746a6273c.slice. 
Nov 6 23:05:52.815013 kubelet[2585]: I1106 23:05:52.814974 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2411e4ec-6a37-41db-b6af-b79746a6273c-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-btbw4\" (UID: \"2411e4ec-6a37-41db-b6af-b79746a6273c\") " pod="kube-system/cilium-operator-6c4d7847fc-btbw4" Nov 6 23:05:52.815013 kubelet[2585]: I1106 23:05:52.815017 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4v9nl\" (UniqueName: \"kubernetes.io/projected/2411e4ec-6a37-41db-b6af-b79746a6273c-kube-api-access-4v9nl\") pod \"cilium-operator-6c4d7847fc-btbw4\" (UID: \"2411e4ec-6a37-41db-b6af-b79746a6273c\") " pod="kube-system/cilium-operator-6c4d7847fc-btbw4" Nov 6 23:05:53.103124 kubelet[2585]: E1106 23:05:53.102993 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:53.103983 containerd[1481]: time="2025-11-06T23:05:53.103696481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-btbw4,Uid:2411e4ec-6a37-41db-b6af-b79746a6273c,Namespace:kube-system,Attempt:0,}" Nov 6 23:05:53.216225 kubelet[2585]: E1106 23:05:53.216177 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:53.269515 kubelet[2585]: E1106 23:05:53.269454 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:53.270269 containerd[1481]: time="2025-11-06T23:05:53.269948869Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-mwm7w,Uid:6b747641-3918-42dd-94f7-2b4906a426e1,Namespace:kube-system,Attempt:0,}" Nov 6 23:05:53.285512 kubelet[2585]: E1106 23:05:53.285474 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:53.286217 containerd[1481]: time="2025-11-06T23:05:53.286012783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5vqsg,Uid:e3647c86-cc77-468f-8b6a-a2c2b794bf85,Namespace:kube-system,Attempt:0,}" Nov 6 23:05:53.315207 containerd[1481]: time="2025-11-06T23:05:53.315125591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:05:53.315207 containerd[1481]: time="2025-11-06T23:05:53.315186676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:05:53.315207 containerd[1481]: time="2025-11-06T23:05:53.315197270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:53.315423 containerd[1481]: time="2025-11-06T23:05:53.315269989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:53.335164 systemd[1]: Started cri-containerd-765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d.scope - libcontainer container 765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d. Nov 6 23:05:53.343491 containerd[1481]: time="2025-11-06T23:05:53.343367688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:05:53.343491 containerd[1481]: time="2025-11-06T23:05:53.343439888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:05:53.343491 containerd[1481]: time="2025-11-06T23:05:53.343455319Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:53.344690 containerd[1481]: time="2025-11-06T23:05:53.344492135Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:53.351081 containerd[1481]: time="2025-11-06T23:05:53.350332127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:05:53.351081 containerd[1481]: time="2025-11-06T23:05:53.350380939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:05:53.351081 containerd[1481]: time="2025-11-06T23:05:53.350392733Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:53.351081 containerd[1481]: time="2025-11-06T23:05:53.350459375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:05:53.368964 systemd[1]: Started cri-containerd-222f26bbc4a946bdf136bbdb7758346e559666197654c4de457203f42093673a.scope - libcontainer container 222f26bbc4a946bdf136bbdb7758346e559666197654c4de457203f42093673a. Nov 6 23:05:53.372902 systemd[1]: Started cri-containerd-a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d.scope - libcontainer container a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d. 
Nov 6 23:05:53.383799 containerd[1481]: time="2025-11-06T23:05:53.383729841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-btbw4,Uid:2411e4ec-6a37-41db-b6af-b79746a6273c,Namespace:kube-system,Attempt:0,} returns sandbox id \"765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d\"" Nov 6 23:05:53.384498 kubelet[2585]: E1106 23:05:53.384466 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:53.386393 containerd[1481]: time="2025-11-06T23:05:53.386324420Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Nov 6 23:05:53.402152 containerd[1481]: time="2025-11-06T23:05:53.402109292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mwm7w,Uid:6b747641-3918-42dd-94f7-2b4906a426e1,Namespace:kube-system,Attempt:0,} returns sandbox id \"222f26bbc4a946bdf136bbdb7758346e559666197654c4de457203f42093673a\"" Nov 6 23:05:53.402859 kubelet[2585]: E1106 23:05:53.402826 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:53.406505 containerd[1481]: time="2025-11-06T23:05:53.406476873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5vqsg,Uid:e3647c86-cc77-468f-8b6a-a2c2b794bf85,Namespace:kube-system,Attempt:0,} returns sandbox id \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\"" Nov 6 23:05:53.407798 containerd[1481]: time="2025-11-06T23:05:53.407727969Z" level=info msg="CreateContainer within sandbox \"222f26bbc4a946bdf136bbdb7758346e559666197654c4de457203f42093673a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 6 23:05:53.409074 kubelet[2585]: E1106 23:05:53.409052 2585 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:53.460479 containerd[1481]: time="2025-11-06T23:05:53.460434291Z" level=info msg="CreateContainer within sandbox \"222f26bbc4a946bdf136bbdb7758346e559666197654c4de457203f42093673a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"cf756f38a42929578867bf44b2400783e3d4d8cee2de56a0c9b105f43fe75e6a\"" Nov 6 23:05:53.461213 containerd[1481]: time="2025-11-06T23:05:53.461185428Z" level=info msg="StartContainer for \"cf756f38a42929578867bf44b2400783e3d4d8cee2de56a0c9b105f43fe75e6a\"" Nov 6 23:05:53.492023 systemd[1]: Started cri-containerd-cf756f38a42929578867bf44b2400783e3d4d8cee2de56a0c9b105f43fe75e6a.scope - libcontainer container cf756f38a42929578867bf44b2400783e3d4d8cee2de56a0c9b105f43fe75e6a. Nov 6 23:05:53.527574 containerd[1481]: time="2025-11-06T23:05:53.527527193Z" level=info msg="StartContainer for \"cf756f38a42929578867bf44b2400783e3d4d8cee2de56a0c9b105f43fe75e6a\" returns successfully" Nov 6 23:05:54.220214 kubelet[2585]: E1106 23:05:54.220185 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:54.222178 kubelet[2585]: E1106 23:05:54.222131 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:54.230515 kubelet[2585]: I1106 23:05:54.230443 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mwm7w" podStartSLOduration=2.230427675 podStartE2EDuration="2.230427675s" podCreationTimestamp="2025-11-06 23:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 
23:05:54.230213189 +0000 UTC m=+7.122095365" watchObservedRunningTime="2025-11-06 23:05:54.230427675 +0000 UTC m=+7.122309811" Nov 6 23:05:54.438720 kubelet[2585]: E1106 23:05:54.438688 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:54.722272 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2293867803.mount: Deactivated successfully. Nov 6 23:05:55.117189 containerd[1481]: time="2025-11-06T23:05:55.117074285Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:55.118410 containerd[1481]: time="2025-11-06T23:05:55.118345976Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Nov 6 23:05:55.119314 containerd[1481]: time="2025-11-06T23:05:55.119286271Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:05:55.120712 containerd[1481]: time="2025-11-06T23:05:55.120680181Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.734280083s" Nov 6 23:05:55.120712 containerd[1481]: time="2025-11-06T23:05:55.120710726Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns 
image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Nov 6 23:05:55.125925 containerd[1481]: time="2025-11-06T23:05:55.125665514Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Nov 6 23:05:55.126134 containerd[1481]: time="2025-11-06T23:05:55.126103217Z" level=info msg="CreateContainer within sandbox \"765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Nov 6 23:05:55.148758 containerd[1481]: time="2025-11-06T23:05:55.148713748Z" level=info msg="CreateContainer within sandbox \"765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\"" Nov 6 23:05:55.149596 containerd[1481]: time="2025-11-06T23:05:55.149569965Z" level=info msg="StartContainer for \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\"" Nov 6 23:05:55.177941 systemd[1]: Started cri-containerd-def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579.scope - libcontainer container def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579. 
Nov 6 23:05:55.201263 containerd[1481]: time="2025-11-06T23:05:55.201221085Z" level=info msg="StartContainer for \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\" returns successfully" Nov 6 23:05:55.224804 kubelet[2585]: E1106 23:05:55.224749 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:55.234586 kubelet[2585]: E1106 23:05:55.233518 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:55.242978 kubelet[2585]: I1106 23:05:55.242919 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-btbw4" podStartSLOduration=1.503831716 podStartE2EDuration="3.242903498s" podCreationTimestamp="2025-11-06 23:05:52 +0000 UTC" firstStartedPulling="2025-11-06 23:05:53.38566787 +0000 UTC m=+6.277550046" lastFinishedPulling="2025-11-06 23:05:55.124739652 +0000 UTC m=+8.016621828" observedRunningTime="2025-11-06 23:05:55.233819913 +0000 UTC m=+8.125702169" watchObservedRunningTime="2025-11-06 23:05:55.242903498 +0000 UTC m=+8.134785674" Nov 6 23:05:56.231563 kubelet[2585]: E1106 23:05:56.231522 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:57.139586 kubelet[2585]: E1106 23:05:57.139490 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:05:57.232453 kubelet[2585]: E1106 23:05:57.232352 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Nov 6 23:06:02.121065 update_engine[1474]: I20251106 23:06:02.120995 1474 update_attempter.cc:509] Updating boot flags... Nov 6 23:06:02.228429 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3028) Nov 6 23:06:02.282792 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3032) Nov 6 23:06:02.502377 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount210931764.mount: Deactivated successfully. Nov 6 23:06:03.975517 containerd[1481]: time="2025-11-06T23:06:03.975469635Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:06:03.976522 containerd[1481]: time="2025-11-06T23:06:03.976316465Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Nov 6 23:06:03.981210 containerd[1481]: time="2025-11-06T23:06:03.981146799Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 6 23:06:03.985380 containerd[1481]: time="2025-11-06T23:06:03.985343040Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 8.859635385s" Nov 6 23:06:03.985470 containerd[1481]: time="2025-11-06T23:06:03.985384028Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Nov 6 23:06:03.988305 containerd[1481]: time="2025-11-06T23:06:03.987990098Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Nov 6 23:06:04.007684 containerd[1481]: time="2025-11-06T23:06:04.007624428Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\"" Nov 6 23:06:04.008208 containerd[1481]: time="2025-11-06T23:06:04.008174636Z" level=info msg="StartContainer for \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\"" Nov 6 23:06:04.039950 systemd[1]: Started cri-containerd-0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd.scope - libcontainer container 0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd. Nov 6 23:06:04.063948 containerd[1481]: time="2025-11-06T23:06:04.063893614Z" level=info msg="StartContainer for \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\" returns successfully" Nov 6 23:06:04.074471 systemd[1]: cri-containerd-0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd.scope: Deactivated successfully. 
Nov 6 23:06:04.159717 containerd[1481]: time="2025-11-06T23:06:04.153938812Z" level=info msg="shim disconnected" id=0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd namespace=k8s.io Nov 6 23:06:04.159717 containerd[1481]: time="2025-11-06T23:06:04.159701897Z" level=warning msg="cleaning up after shim disconnected" id=0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd namespace=k8s.io Nov 6 23:06:04.159717 containerd[1481]: time="2025-11-06T23:06:04.159715333Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:06:04.255857 kubelet[2585]: E1106 23:06:04.255645 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:04.257873 containerd[1481]: time="2025-11-06T23:06:04.257834576Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Nov 6 23:06:04.271530 containerd[1481]: time="2025-11-06T23:06:04.270424772Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\"" Nov 6 23:06:04.271530 containerd[1481]: time="2025-11-06T23:06:04.270800788Z" level=info msg="StartContainer for \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\"" Nov 6 23:06:04.294942 systemd[1]: Started cri-containerd-aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d.scope - libcontainer container aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d. 
Nov 6 23:06:04.314518 containerd[1481]: time="2025-11-06T23:06:04.314479138Z" level=info msg="StartContainer for \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\" returns successfully" Nov 6 23:06:04.324593 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 6 23:06:04.324823 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:06:04.325356 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:06:04.332100 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 6 23:06:04.332268 systemd[1]: cri-containerd-aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d.scope: Deactivated successfully. Nov 6 23:06:04.343005 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 6 23:06:04.353806 containerd[1481]: time="2025-11-06T23:06:04.353737633Z" level=info msg="shim disconnected" id=aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d namespace=k8s.io Nov 6 23:06:04.353806 containerd[1481]: time="2025-11-06T23:06:04.353803894Z" level=warning msg="cleaning up after shim disconnected" id=aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d namespace=k8s.io Nov 6 23:06:04.353806 containerd[1481]: time="2025-11-06T23:06:04.353812012Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:06:05.004496 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd-rootfs.mount: Deactivated successfully. 
Nov 6 23:06:05.260344 kubelet[2585]: E1106 23:06:05.259968 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:05.262649 containerd[1481]: time="2025-11-06T23:06:05.262388079Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Nov 6 23:06:05.290831 containerd[1481]: time="2025-11-06T23:06:05.290760517Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\"" Nov 6 23:06:05.293378 containerd[1481]: time="2025-11-06T23:06:05.291998755Z" level=info msg="StartContainer for \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\"" Nov 6 23:06:05.322989 systemd[1]: Started cri-containerd-7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c.scope - libcontainer container 7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c. Nov 6 23:06:05.347790 containerd[1481]: time="2025-11-06T23:06:05.347691945Z" level=info msg="StartContainer for \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\" returns successfully" Nov 6 23:06:05.350383 systemd[1]: cri-containerd-7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c.scope: Deactivated successfully. 
Nov 6 23:06:05.372893 containerd[1481]: time="2025-11-06T23:06:05.372821544Z" level=info msg="shim disconnected" id=7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c namespace=k8s.io Nov 6 23:06:05.372893 containerd[1481]: time="2025-11-06T23:06:05.372874490Z" level=warning msg="cleaning up after shim disconnected" id=7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c namespace=k8s.io Nov 6 23:06:05.372893 containerd[1481]: time="2025-11-06T23:06:05.372882928Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:06:06.004343 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c-rootfs.mount: Deactivated successfully. Nov 6 23:06:06.263658 kubelet[2585]: E1106 23:06:06.263049 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:06.266431 containerd[1481]: time="2025-11-06T23:06:06.266401178Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Nov 6 23:06:06.289145 containerd[1481]: time="2025-11-06T23:06:06.289100536Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\"" Nov 6 23:06:06.289727 containerd[1481]: time="2025-11-06T23:06:06.289685714Z" level=info msg="StartContainer for \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\"" Nov 6 23:06:06.309912 systemd[1]: Started cri-containerd-cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6.scope - libcontainer container cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6. 
Nov 6 23:06:06.330507 systemd[1]: cri-containerd-cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6.scope: Deactivated successfully. Nov 6 23:06:06.334122 containerd[1481]: time="2025-11-06T23:06:06.334083634Z" level=info msg="StartContainer for \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\" returns successfully" Nov 6 23:06:06.355706 containerd[1481]: time="2025-11-06T23:06:06.355577446Z" level=info msg="shim disconnected" id=cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6 namespace=k8s.io Nov 6 23:06:06.355706 containerd[1481]: time="2025-11-06T23:06:06.355646869Z" level=warning msg="cleaning up after shim disconnected" id=cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6 namespace=k8s.io Nov 6 23:06:06.355706 containerd[1481]: time="2025-11-06T23:06:06.355658626Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:06:07.004424 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6-rootfs.mount: Deactivated successfully. Nov 6 23:06:07.271039 kubelet[2585]: E1106 23:06:07.270526 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:07.274503 containerd[1481]: time="2025-11-06T23:06:07.274230033Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Nov 6 23:06:07.293787 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3580826068.mount: Deactivated successfully. 
Nov 6 23:06:07.295906 containerd[1481]: time="2025-11-06T23:06:07.295796195Z" level=info msg="CreateContainer within sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\"" Nov 6 23:06:07.296273 containerd[1481]: time="2025-11-06T23:06:07.296245053Z" level=info msg="StartContainer for \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\"" Nov 6 23:06:07.326950 systemd[1]: Started cri-containerd-6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653.scope - libcontainer container 6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653. Nov 6 23:06:07.348431 containerd[1481]: time="2025-11-06T23:06:07.348385443Z" level=info msg="StartContainer for \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\" returns successfully" Nov 6 23:06:07.453100 kubelet[2585]: I1106 23:06:07.452940 2585 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Nov 6 23:06:07.489469 systemd[1]: Created slice kubepods-burstable-pod40dbc2bb_73f4_45f3_bcc7_2a38938f61c4.slice - libcontainer container kubepods-burstable-pod40dbc2bb_73f4_45f3_bcc7_2a38938f61c4.slice. Nov 6 23:06:07.500621 systemd[1]: Created slice kubepods-burstable-podcdef3662_dd76_4a05_8c63_548c75528269.slice - libcontainer container kubepods-burstable-podcdef3662_dd76_4a05_8c63_548c75528269.slice. 
Nov 6 23:06:07.587594 kubelet[2585]: I1106 23:06:07.587482 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40dbc2bb-73f4-45f3-bcc7-2a38938f61c4-config-volume\") pod \"coredns-668d6bf9bc-956jq\" (UID: \"40dbc2bb-73f4-45f3-bcc7-2a38938f61c4\") " pod="kube-system/coredns-668d6bf9bc-956jq" Nov 6 23:06:07.587594 kubelet[2585]: I1106 23:06:07.587539 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krmrl\" (UniqueName: \"kubernetes.io/projected/40dbc2bb-73f4-45f3-bcc7-2a38938f61c4-kube-api-access-krmrl\") pod \"coredns-668d6bf9bc-956jq\" (UID: \"40dbc2bb-73f4-45f3-bcc7-2a38938f61c4\") " pod="kube-system/coredns-668d6bf9bc-956jq" Nov 6 23:06:07.587594 kubelet[2585]: I1106 23:06:07.587562 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njc66\" (UniqueName: \"kubernetes.io/projected/cdef3662-dd76-4a05-8c63-548c75528269-kube-api-access-njc66\") pod \"coredns-668d6bf9bc-hw48r\" (UID: \"cdef3662-dd76-4a05-8c63-548c75528269\") " pod="kube-system/coredns-668d6bf9bc-hw48r" Nov 6 23:06:07.587594 kubelet[2585]: I1106 23:06:07.587593 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cdef3662-dd76-4a05-8c63-548c75528269-config-volume\") pod \"coredns-668d6bf9bc-hw48r\" (UID: \"cdef3662-dd76-4a05-8c63-548c75528269\") " pod="kube-system/coredns-668d6bf9bc-hw48r" Nov 6 23:06:07.795968 kubelet[2585]: E1106 23:06:07.795839 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:07.796822 containerd[1481]: time="2025-11-06T23:06:07.796760113Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-956jq,Uid:40dbc2bb-73f4-45f3-bcc7-2a38938f61c4,Namespace:kube-system,Attempt:0,}" Nov 6 23:06:07.803920 kubelet[2585]: E1106 23:06:07.803887 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:07.804519 containerd[1481]: time="2025-11-06T23:06:07.804487631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hw48r,Uid:cdef3662-dd76-4a05-8c63-548c75528269,Namespace:kube-system,Attempt:0,}" Nov 6 23:06:08.273713 kubelet[2585]: E1106 23:06:08.273682 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:08.289752 kubelet[2585]: I1106 23:06:08.289464 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-5vqsg" podStartSLOduration=5.7127144659999995 podStartE2EDuration="16.28944766s" podCreationTimestamp="2025-11-06 23:05:52 +0000 UTC" firstStartedPulling="2025-11-06 23:05:53.409602073 +0000 UTC m=+6.301484249" lastFinishedPulling="2025-11-06 23:06:03.986335267 +0000 UTC m=+16.878217443" observedRunningTime="2025-11-06 23:06:08.28860472 +0000 UTC m=+21.180486896" watchObservedRunningTime="2025-11-06 23:06:08.28944766 +0000 UTC m=+21.181329836" Nov 6 23:06:09.275917 kubelet[2585]: E1106 23:06:09.275764 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:09.355446 systemd-networkd[1401]: cilium_host: Link UP Nov 6 23:06:09.355560 systemd-networkd[1401]: cilium_net: Link UP Nov 6 23:06:09.355563 systemd-networkd[1401]: cilium_net: Gained carrier Nov 6 23:06:09.355677 systemd-networkd[1401]: cilium_host: Gained carrier Nov 6 23:06:09.355823 
systemd-networkd[1401]: cilium_host: Gained IPv6LL Nov 6 23:06:09.427222 systemd-networkd[1401]: cilium_vxlan: Link UP Nov 6 23:06:09.427230 systemd-networkd[1401]: cilium_vxlan: Gained carrier Nov 6 23:06:09.674805 kernel: NET: Registered PF_ALG protocol family Nov 6 23:06:09.745930 systemd-networkd[1401]: cilium_net: Gained IPv6LL Nov 6 23:06:10.238202 systemd-networkd[1401]: lxc_health: Link UP Nov 6 23:06:10.238941 systemd-networkd[1401]: lxc_health: Gained carrier Nov 6 23:06:10.277534 kubelet[2585]: E1106 23:06:10.277490 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:10.352512 systemd-networkd[1401]: lxcf3aa5eb9b24f: Link UP Nov 6 23:06:10.361788 kernel: eth0: renamed from tmpe2e57 Nov 6 23:06:10.381540 kernel: eth0: renamed from tmp23395 Nov 6 23:06:10.386317 systemd-networkd[1401]: lxc94a1467cf571: Link UP Nov 6 23:06:10.386512 systemd-networkd[1401]: lxcf3aa5eb9b24f: Gained carrier Nov 6 23:06:10.386625 systemd-networkd[1401]: lxc94a1467cf571: Gained carrier Nov 6 23:06:10.929951 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL Nov 6 23:06:11.294274 kubelet[2585]: E1106 23:06:11.293521 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:11.314875 systemd-networkd[1401]: lxc_health: Gained IPv6LL Nov 6 23:06:11.827725 systemd-networkd[1401]: lxc94a1467cf571: Gained IPv6LL Nov 6 23:06:12.209985 systemd-networkd[1401]: lxcf3aa5eb9b24f: Gained IPv6LL Nov 6 23:06:12.285052 kubelet[2585]: E1106 23:06:12.285019 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:13.284499 kubelet[2585]: E1106 23:06:13.284150 2585 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:13.947025 systemd[1]: Started sshd@7-10.0.0.7:22-10.0.0.1:41382.service - OpenSSH per-connection server daemon (10.0.0.1:41382). Nov 6 23:06:14.000521 containerd[1481]: time="2025-11-06T23:06:14.000437326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:06:14.000862 containerd[1481]: time="2025-11-06T23:06:14.000550669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:06:14.000862 containerd[1481]: time="2025-11-06T23:06:14.000592942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:06:14.000862 containerd[1481]: time="2025-11-06T23:06:14.000755997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:06:14.001664 sshd[3840]: Accepted publickey for core from 10.0.0.1 port 41382 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:14.002707 containerd[1481]: time="2025-11-06T23:06:14.002617087Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 6 23:06:14.002707 containerd[1481]: time="2025-11-06T23:06:14.002675558Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 6 23:06:14.002707 containerd[1481]: time="2025-11-06T23:06:14.002690876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:06:14.003439 sshd-session[3840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:14.005484 containerd[1481]: time="2025-11-06T23:06:14.005384045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 6 23:06:14.013785 systemd-logind[1468]: New session 8 of user core. Nov 6 23:06:14.020990 systemd[1]: Started cri-containerd-233957c0cf0feeb8328fe555fa9b6a2daf7ec2c5514256550aaa1944962fa857.scope - libcontainer container 233957c0cf0feeb8328fe555fa9b6a2daf7ec2c5514256550aaa1944962fa857. Nov 6 23:06:14.022236 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 6 23:06:14.030044 systemd[1]: Started cri-containerd-e2e57b23b81ff6cab41a152bb8792f987fd9ca2612360bb256d5df37963b2ba2.scope - libcontainer container e2e57b23b81ff6cab41a152bb8792f987fd9ca2612360bb256d5df37963b2ba2. Nov 6 23:06:14.037960 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 23:06:14.043073 systemd-resolved[1320]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Nov 6 23:06:14.056044 containerd[1481]: time="2025-11-06T23:06:14.056002179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hw48r,Uid:cdef3662-dd76-4a05-8c63-548c75528269,Namespace:kube-system,Attempt:0,} returns sandbox id \"233957c0cf0feeb8328fe555fa9b6a2daf7ec2c5514256550aaa1944962fa857\"" Nov 6 23:06:14.058510 kubelet[2585]: E1106 23:06:14.057997 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:14.060420 containerd[1481]: time="2025-11-06T23:06:14.060388742Z" level=info msg="CreateContainer within sandbox 
\"233957c0cf0feeb8328fe555fa9b6a2daf7ec2c5514256550aaa1944962fa857\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:06:14.062908 containerd[1481]: time="2025-11-06T23:06:14.062874541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-956jq,Uid:40dbc2bb-73f4-45f3-bcc7-2a38938f61c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2e57b23b81ff6cab41a152bb8792f987fd9ca2612360bb256d5df37963b2ba2\"" Nov 6 23:06:14.063809 kubelet[2585]: E1106 23:06:14.063764 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:14.065688 containerd[1481]: time="2025-11-06T23:06:14.065658737Z" level=info msg="CreateContainer within sandbox \"e2e57b23b81ff6cab41a152bb8792f987fd9ca2612360bb256d5df37963b2ba2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 6 23:06:14.082043 containerd[1481]: time="2025-11-06T23:06:14.081995086Z" level=info msg="CreateContainer within sandbox \"233957c0cf0feeb8328fe555fa9b6a2daf7ec2c5514256550aaa1944962fa857\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1e744da075618c69a00d39b2a7af3f2a1c20f5848f77f92d819c5a865b40fcb3\"" Nov 6 23:06:14.085662 containerd[1481]: time="2025-11-06T23:06:14.085618520Z" level=info msg="StartContainer for \"1e744da075618c69a00d39b2a7af3f2a1c20f5848f77f92d819c5a865b40fcb3\"" Nov 6 23:06:14.089842 containerd[1481]: time="2025-11-06T23:06:14.089757719Z" level=info msg="CreateContainer within sandbox \"e2e57b23b81ff6cab41a152bb8792f987fd9ca2612360bb256d5df37963b2ba2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0d2e105c7f6c935ca9e31fe68f99f4140d3fe48a683fc6d042e8cf87ebbe4818\"" Nov 6 23:06:14.091002 containerd[1481]: time="2025-11-06T23:06:14.090976302Z" level=info msg="StartContainer for \"0d2e105c7f6c935ca9e31fe68f99f4140d3fe48a683fc6d042e8cf87ebbe4818\"" Nov 6 23:06:14.123074 
systemd[1]: Started cri-containerd-0d2e105c7f6c935ca9e31fe68f99f4140d3fe48a683fc6d042e8cf87ebbe4818.scope - libcontainer container 0d2e105c7f6c935ca9e31fe68f99f4140d3fe48a683fc6d042e8cf87ebbe4818. Nov 6 23:06:14.124138 systemd[1]: Started cri-containerd-1e744da075618c69a00d39b2a7af3f2a1c20f5848f77f92d819c5a865b40fcb3.scope - libcontainer container 1e744da075618c69a00d39b2a7af3f2a1c20f5848f77f92d819c5a865b40fcb3. Nov 6 23:06:14.170563 containerd[1481]: time="2025-11-06T23:06:14.168723338Z" level=info msg="StartContainer for \"1e744da075618c69a00d39b2a7af3f2a1c20f5848f77f92d819c5a865b40fcb3\" returns successfully" Nov 6 23:06:14.170563 containerd[1481]: time="2025-11-06T23:06:14.168736656Z" level=info msg="StartContainer for \"0d2e105c7f6c935ca9e31fe68f99f4140d3fe48a683fc6d042e8cf87ebbe4818\" returns successfully" Nov 6 23:06:14.188816 sshd[3904]: Connection closed by 10.0.0.1 port 41382 Nov 6 23:06:14.189206 sshd-session[3840]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:14.193568 systemd[1]: session-8.scope: Deactivated successfully. Nov 6 23:06:14.193597 systemd-logind[1468]: Session 8 logged out. Waiting for processes to exit. Nov 6 23:06:14.195485 systemd[1]: sshd@7-10.0.0.7:22-10.0.0.1:41382.service: Deactivated successfully. Nov 6 23:06:14.198459 systemd-logind[1468]: Removed session 8. 
Nov 6 23:06:14.287799 kubelet[2585]: E1106 23:06:14.287138 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:14.289852 kubelet[2585]: E1106 23:06:14.289762 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:14.311787 kubelet[2585]: I1106 23:06:14.311721 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-956jq" podStartSLOduration=22.311704946 podStartE2EDuration="22.311704946s" podCreationTimestamp="2025-11-06 23:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:06:14.299824471 +0000 UTC m=+27.191706687" watchObservedRunningTime="2025-11-06 23:06:14.311704946 +0000 UTC m=+27.203587122" Nov 6 23:06:15.290989 kubelet[2585]: E1106 23:06:15.290879 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:15.290989 kubelet[2585]: E1106 23:06:15.290934 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:16.293400 kubelet[2585]: E1106 23:06:16.293309 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:16.293400 kubelet[2585]: E1106 23:06:16.293350 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:06:19.219096 
systemd[1]: Started sshd@8-10.0.0.7:22-10.0.0.1:41388.service - OpenSSH per-connection server daemon (10.0.0.1:41388). Nov 6 23:06:19.262517 sshd[4025]: Accepted publickey for core from 10.0.0.1 port 41388 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:19.264003 sshd-session[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:19.268610 systemd-logind[1468]: New session 9 of user core. Nov 6 23:06:19.276946 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 6 23:06:19.393866 sshd[4027]: Connection closed by 10.0.0.1 port 41388 Nov 6 23:06:19.393374 sshd-session[4025]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:19.396986 systemd[1]: sshd@8-10.0.0.7:22-10.0.0.1:41388.service: Deactivated successfully. Nov 6 23:06:19.400511 systemd[1]: session-9.scope: Deactivated successfully. Nov 6 23:06:19.401233 systemd-logind[1468]: Session 9 logged out. Waiting for processes to exit. Nov 6 23:06:19.402117 systemd-logind[1468]: Removed session 9. Nov 6 23:06:24.404107 systemd[1]: Started sshd@9-10.0.0.7:22-10.0.0.1:57852.service - OpenSSH per-connection server daemon (10.0.0.1:57852). Nov 6 23:06:24.447632 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 57852 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:24.448952 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:24.453498 systemd-logind[1468]: New session 10 of user core. Nov 6 23:06:24.462934 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 6 23:06:24.576547 sshd[4046]: Connection closed by 10.0.0.1 port 57852 Nov 6 23:06:24.576991 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:24.580274 systemd[1]: sshd@9-10.0.0.7:22-10.0.0.1:57852.service: Deactivated successfully. Nov 6 23:06:24.581917 systemd[1]: session-10.scope: Deactivated successfully. 
Nov 6 23:06:24.582529 systemd-logind[1468]: Session 10 logged out. Waiting for processes to exit. Nov 6 23:06:24.583374 systemd-logind[1468]: Removed session 10. Nov 6 23:06:29.591325 systemd[1]: Started sshd@10-10.0.0.7:22-10.0.0.1:58076.service - OpenSSH per-connection server daemon (10.0.0.1:58076). Nov 6 23:06:29.629858 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 58076 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:29.631187 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:29.634930 systemd-logind[1468]: New session 11 of user core. Nov 6 23:06:29.644945 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 6 23:06:29.756108 sshd[4062]: Connection closed by 10.0.0.1 port 58076 Nov 6 23:06:29.756618 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:29.768220 systemd[1]: sshd@10-10.0.0.7:22-10.0.0.1:58076.service: Deactivated successfully. Nov 6 23:06:29.770683 systemd[1]: session-11.scope: Deactivated successfully. Nov 6 23:06:29.771419 systemd-logind[1468]: Session 11 logged out. Waiting for processes to exit. Nov 6 23:06:29.783343 systemd[1]: Started sshd@11-10.0.0.7:22-10.0.0.1:58082.service - OpenSSH per-connection server daemon (10.0.0.1:58082). Nov 6 23:06:29.784490 systemd-logind[1468]: Removed session 11. Nov 6 23:06:29.819070 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 58082 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:29.820388 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:29.824367 systemd-logind[1468]: New session 12 of user core. Nov 6 23:06:29.829939 systemd[1]: Started session-12.scope - Session 12 of User core. 
Nov 6 23:06:29.982734 sshd[4079]: Connection closed by 10.0.0.1 port 58082 Nov 6 23:06:29.983342 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:29.992103 systemd[1]: sshd@11-10.0.0.7:22-10.0.0.1:58082.service: Deactivated successfully. Nov 6 23:06:29.993702 systemd[1]: session-12.scope: Deactivated successfully. Nov 6 23:06:29.998323 systemd-logind[1468]: Session 12 logged out. Waiting for processes to exit. Nov 6 23:06:30.005111 systemd[1]: Started sshd@12-10.0.0.7:22-10.0.0.1:58098.service - OpenSSH per-connection server daemon (10.0.0.1:58098). Nov 6 23:06:30.008533 systemd-logind[1468]: Removed session 12. Nov 6 23:06:30.050412 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 58098 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:30.051600 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:30.055516 systemd-logind[1468]: New session 13 of user core. Nov 6 23:06:30.061950 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 6 23:06:30.179474 sshd[4093]: Connection closed by 10.0.0.1 port 58098 Nov 6 23:06:30.178930 sshd-session[4090]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:30.182372 systemd[1]: sshd@12-10.0.0.7:22-10.0.0.1:58098.service: Deactivated successfully. Nov 6 23:06:30.184173 systemd[1]: session-13.scope: Deactivated successfully. Nov 6 23:06:30.184797 systemd-logind[1468]: Session 13 logged out. Waiting for processes to exit. Nov 6 23:06:30.185576 systemd-logind[1468]: Removed session 13. Nov 6 23:06:35.195142 systemd[1]: Started sshd@13-10.0.0.7:22-10.0.0.1:58102.service - OpenSSH per-connection server daemon (10.0.0.1:58102). 
Nov 6 23:06:35.244182 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 58102 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:35.246073 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:35.250842 systemd-logind[1468]: New session 14 of user core. Nov 6 23:06:35.261018 systemd[1]: Started session-14.scope - Session 14 of User core. Nov 6 23:06:35.381407 sshd[4109]: Connection closed by 10.0.0.1 port 58102 Nov 6 23:06:35.381754 sshd-session[4107]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:35.385100 systemd-logind[1468]: Session 14 logged out. Waiting for processes to exit. Nov 6 23:06:35.385976 systemd[1]: sshd@13-10.0.0.7:22-10.0.0.1:58102.service: Deactivated successfully. Nov 6 23:06:35.388464 systemd[1]: session-14.scope: Deactivated successfully. Nov 6 23:06:35.389195 systemd-logind[1468]: Removed session 14. Nov 6 23:06:40.393090 systemd[1]: Started sshd@14-10.0.0.7:22-10.0.0.1:48314.service - OpenSSH per-connection server daemon (10.0.0.1:48314). Nov 6 23:06:40.431852 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 48314 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:40.433232 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:40.436863 systemd-logind[1468]: New session 15 of user core. Nov 6 23:06:40.443911 systemd[1]: Started session-15.scope - Session 15 of User core. Nov 6 23:06:40.551153 sshd[4124]: Connection closed by 10.0.0.1 port 48314 Nov 6 23:06:40.551694 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:40.564306 systemd[1]: sshd@14-10.0.0.7:22-10.0.0.1:48314.service: Deactivated successfully. Nov 6 23:06:40.565764 systemd[1]: session-15.scope: Deactivated successfully. Nov 6 23:06:40.566393 systemd-logind[1468]: Session 15 logged out. Waiting for processes to exit. 
Nov 6 23:06:40.573012 systemd[1]: Started sshd@15-10.0.0.7:22-10.0.0.1:48316.service - OpenSSH per-connection server daemon (10.0.0.1:48316). Nov 6 23:06:40.574233 systemd-logind[1468]: Removed session 15. Nov 6 23:06:40.607872 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 48316 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:40.609065 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:40.613846 systemd-logind[1468]: New session 16 of user core. Nov 6 23:06:40.623948 systemd[1]: Started session-16.scope - Session 16 of User core. Nov 6 23:06:40.807487 sshd[4139]: Connection closed by 10.0.0.1 port 48316 Nov 6 23:06:40.808192 sshd-session[4136]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:40.820426 systemd[1]: sshd@15-10.0.0.7:22-10.0.0.1:48316.service: Deactivated successfully. Nov 6 23:06:40.822306 systemd[1]: session-16.scope: Deactivated successfully. Nov 6 23:06:40.823012 systemd-logind[1468]: Session 16 logged out. Waiting for processes to exit. Nov 6 23:06:40.830175 systemd[1]: Started sshd@16-10.0.0.7:22-10.0.0.1:48322.service - OpenSSH per-connection server daemon (10.0.0.1:48322). Nov 6 23:06:40.831675 systemd-logind[1468]: Removed session 16. Nov 6 23:06:40.873938 sshd[4151]: Accepted publickey for core from 10.0.0.1 port 48322 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:40.875377 sshd-session[4151]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:40.879582 systemd-logind[1468]: New session 17 of user core. Nov 6 23:06:40.891935 systemd[1]: Started session-17.scope - Session 17 of User core. Nov 6 23:06:41.429126 sshd[4154]: Connection closed by 10.0.0.1 port 48322 Nov 6 23:06:41.429597 sshd-session[4151]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:41.443349 systemd[1]: sshd@16-10.0.0.7:22-10.0.0.1:48322.service: Deactivated successfully. 
Nov 6 23:06:41.448129 systemd[1]: session-17.scope: Deactivated successfully. Nov 6 23:06:41.450294 systemd-logind[1468]: Session 17 logged out. Waiting for processes to exit. Nov 6 23:06:41.462075 systemd[1]: Started sshd@17-10.0.0.7:22-10.0.0.1:48330.service - OpenSSH per-connection server daemon (10.0.0.1:48330). Nov 6 23:06:41.463580 systemd-logind[1468]: Removed session 17. Nov 6 23:06:41.501173 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 48330 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:41.502514 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:41.506930 systemd-logind[1468]: New session 18 of user core. Nov 6 23:06:41.512916 systemd[1]: Started session-18.scope - Session 18 of User core. Nov 6 23:06:41.721616 sshd[4175]: Connection closed by 10.0.0.1 port 48330 Nov 6 23:06:41.722058 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:41.731481 systemd[1]: sshd@17-10.0.0.7:22-10.0.0.1:48330.service: Deactivated successfully. Nov 6 23:06:41.733905 systemd[1]: session-18.scope: Deactivated successfully. Nov 6 23:06:41.734745 systemd-logind[1468]: Session 18 logged out. Waiting for processes to exit. Nov 6 23:06:41.741063 systemd[1]: Started sshd@18-10.0.0.7:22-10.0.0.1:48338.service - OpenSSH per-connection server daemon (10.0.0.1:48338). Nov 6 23:06:41.742149 systemd-logind[1468]: Removed session 18. Nov 6 23:06:41.776590 sshd[4186]: Accepted publickey for core from 10.0.0.1 port 48338 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:41.777881 sshd-session[4186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:41.781756 systemd-logind[1468]: New session 19 of user core. Nov 6 23:06:41.791968 systemd[1]: Started session-19.scope - Session 19 of User core. 
Nov 6 23:06:41.901347 sshd[4189]: Connection closed by 10.0.0.1 port 48338 Nov 6 23:06:41.901687 sshd-session[4186]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:41.905310 systemd[1]: sshd@18-10.0.0.7:22-10.0.0.1:48338.service: Deactivated successfully. Nov 6 23:06:41.907135 systemd[1]: session-19.scope: Deactivated successfully. Nov 6 23:06:41.909121 systemd-logind[1468]: Session 19 logged out. Waiting for processes to exit. Nov 6 23:06:41.910035 systemd-logind[1468]: Removed session 19. Nov 6 23:06:46.921594 systemd[1]: Started sshd@19-10.0.0.7:22-10.0.0.1:48350.service - OpenSSH per-connection server daemon (10.0.0.1:48350). Nov 6 23:06:46.959923 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 48350 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:46.961171 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:46.965626 systemd-logind[1468]: New session 20 of user core. Nov 6 23:06:46.975925 systemd[1]: Started session-20.scope - Session 20 of User core. Nov 6 23:06:47.085416 sshd[4205]: Connection closed by 10.0.0.1 port 48350 Nov 6 23:06:47.085758 sshd-session[4203]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:47.089240 systemd[1]: sshd@19-10.0.0.7:22-10.0.0.1:48350.service: Deactivated successfully. Nov 6 23:06:47.091016 systemd[1]: session-20.scope: Deactivated successfully. Nov 6 23:06:47.091713 systemd-logind[1468]: Session 20 logged out. Waiting for processes to exit. Nov 6 23:06:47.092539 systemd-logind[1468]: Removed session 20. Nov 6 23:06:52.101626 systemd[1]: Started sshd@20-10.0.0.7:22-10.0.0.1:54294.service - OpenSSH per-connection server daemon (10.0.0.1:54294). 
Nov 6 23:06:52.144023 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 54294 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:52.145287 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:52.151547 systemd-logind[1468]: New session 21 of user core. Nov 6 23:06:52.158942 systemd[1]: Started session-21.scope - Session 21 of User core. Nov 6 23:06:52.277748 sshd[4224]: Connection closed by 10.0.0.1 port 54294 Nov 6 23:06:52.278102 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:52.281273 systemd[1]: sshd@20-10.0.0.7:22-10.0.0.1:54294.service: Deactivated successfully. Nov 6 23:06:52.283108 systemd[1]: session-21.scope: Deactivated successfully. Nov 6 23:06:52.283803 systemd-logind[1468]: Session 21 logged out. Waiting for processes to exit. Nov 6 23:06:52.284504 systemd-logind[1468]: Removed session 21. Nov 6 23:06:57.289684 systemd[1]: Started sshd@21-10.0.0.7:22-10.0.0.1:54302.service - OpenSSH per-connection server daemon (10.0.0.1:54302). Nov 6 23:06:57.328091 sshd[4239]: Accepted publickey for core from 10.0.0.1 port 54302 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:06:57.329461 sshd-session[4239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:06:57.335502 systemd-logind[1468]: New session 22 of user core. Nov 6 23:06:57.342012 systemd[1]: Started session-22.scope - Session 22 of User core. Nov 6 23:06:57.447862 sshd[4241]: Connection closed by 10.0.0.1 port 54302 Nov 6 23:06:57.447698 sshd-session[4239]: pam_unix(sshd:session): session closed for user core Nov 6 23:06:57.451329 systemd[1]: sshd@21-10.0.0.7:22-10.0.0.1:54302.service: Deactivated successfully. Nov 6 23:06:57.453034 systemd[1]: session-22.scope: Deactivated successfully. Nov 6 23:06:57.453692 systemd-logind[1468]: Session 22 logged out. Waiting for processes to exit. 
Nov 6 23:06:57.454432 systemd-logind[1468]: Removed session 22. Nov 6 23:06:59.186990 kubelet[2585]: E1106 23:06:59.186534 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Nov 6 23:07:02.461171 systemd[1]: Started sshd@22-10.0.0.7:22-10.0.0.1:45154.service - OpenSSH per-connection server daemon (10.0.0.1:45154). Nov 6 23:07:02.499714 sshd[4254]: Accepted publickey for core from 10.0.0.1 port 45154 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:07:02.500943 sshd-session[4254]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:07:02.504814 systemd-logind[1468]: New session 23 of user core. Nov 6 23:07:02.515948 systemd[1]: Started session-23.scope - Session 23 of User core. Nov 6 23:07:02.620248 sshd[4256]: Connection closed by 10.0.0.1 port 45154 Nov 6 23:07:02.620754 sshd-session[4254]: pam_unix(sshd:session): session closed for user core Nov 6 23:07:02.630988 systemd[1]: sshd@22-10.0.0.7:22-10.0.0.1:45154.service: Deactivated successfully. Nov 6 23:07:02.633273 systemd[1]: session-23.scope: Deactivated successfully. Nov 6 23:07:02.633953 systemd-logind[1468]: Session 23 logged out. Waiting for processes to exit. Nov 6 23:07:02.640085 systemd[1]: Started sshd@23-10.0.0.7:22-10.0.0.1:45170.service - OpenSSH per-connection server daemon (10.0.0.1:45170). Nov 6 23:07:02.641133 systemd-logind[1468]: Removed session 23. Nov 6 23:07:02.674574 sshd[4268]: Accepted publickey for core from 10.0.0.1 port 45170 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ Nov 6 23:07:02.675728 sshd-session[4268]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 6 23:07:02.679839 systemd-logind[1468]: New session 24 of user core. Nov 6 23:07:02.686970 systemd[1]: Started session-24.scope - Session 24 of User core. 
Nov 6 23:07:05.090807 kubelet[2585]: I1106 23:07:05.089875 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hw48r" podStartSLOduration=73.089857093 podStartE2EDuration="1m13.089857093s" podCreationTimestamp="2025-11-06 23:05:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:06:14.323446282 +0000 UTC m=+27.215328458" watchObservedRunningTime="2025-11-06 23:07:05.089857093 +0000 UTC m=+77.981739229" Nov 6 23:07:05.098570 containerd[1481]: time="2025-11-06T23:07:05.098503762Z" level=info msg="StopContainer for \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\" with timeout 30 (s)" Nov 6 23:07:05.099576 containerd[1481]: time="2025-11-06T23:07:05.099487911Z" level=info msg="Stop container \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\" with signal terminated" Nov 6 23:07:05.113103 systemd[1]: cri-containerd-def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579.scope: Deactivated successfully. Nov 6 23:07:05.137730 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579-rootfs.mount: Deactivated successfully. 
Nov 6 23:07:05.139184 containerd[1481]: time="2025-11-06T23:07:05.138971576Z" level=info msg="StopContainer for \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\" with timeout 2 (s)" Nov 6 23:07:05.139736 containerd[1481]: time="2025-11-06T23:07:05.139527970Z" level=info msg="Stop container \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\" with signal terminated" Nov 6 23:07:05.142600 containerd[1481]: time="2025-11-06T23:07:05.142558218Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 6 23:07:05.145176 systemd-networkd[1401]: lxc_health: Link DOWN Nov 6 23:07:05.145183 systemd-networkd[1401]: lxc_health: Lost carrier Nov 6 23:07:05.148784 containerd[1481]: time="2025-11-06T23:07:05.148651914Z" level=info msg="shim disconnected" id=def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579 namespace=k8s.io Nov 6 23:07:05.148784 containerd[1481]: time="2025-11-06T23:07:05.148701633Z" level=warning msg="cleaning up after shim disconnected" id=def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579 namespace=k8s.io Nov 6 23:07:05.148784 containerd[1481]: time="2025-11-06T23:07:05.148712153Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:07:05.164340 systemd[1]: cri-containerd-6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653.scope: Deactivated successfully. Nov 6 23:07:05.164756 systemd[1]: cri-containerd-6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653.scope: Consumed 6.233s CPU time, 126.2M memory peak, 152K read from disk, 12.9M written to disk. 
Nov 6 23:07:05.202067 containerd[1481]: time="2025-11-06T23:07:05.202008192Z" level=info msg="StopContainer for \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\" returns successfully" Nov 6 23:07:05.203247 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653-rootfs.mount: Deactivated successfully. Nov 6 23:07:05.204205 containerd[1481]: time="2025-11-06T23:07:05.204155729Z" level=info msg="StopPodSandbox for \"765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d\"" Nov 6 23:07:05.204290 containerd[1481]: time="2025-11-06T23:07:05.204228449Z" level=info msg="Container to stop \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:07:05.205731 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d-shm.mount: Deactivated successfully. Nov 6 23:07:05.212093 systemd[1]: cri-containerd-765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d.scope: Deactivated successfully. 
Nov 6 23:07:05.214323 containerd[1481]: time="2025-11-06T23:07:05.213429992Z" level=info msg="shim disconnected" id=6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653 namespace=k8s.io Nov 6 23:07:05.214323 containerd[1481]: time="2025-11-06T23:07:05.213475831Z" level=warning msg="cleaning up after shim disconnected" id=6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653 namespace=k8s.io Nov 6 23:07:05.214323 containerd[1481]: time="2025-11-06T23:07:05.213486271Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:07:05.228814 containerd[1481]: time="2025-11-06T23:07:05.228741511Z" level=info msg="StopContainer for \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\" returns successfully" Nov 6 23:07:05.229524 containerd[1481]: time="2025-11-06T23:07:05.229311265Z" level=info msg="StopPodSandbox for \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\"" Nov 6 23:07:05.229524 containerd[1481]: time="2025-11-06T23:07:05.229353784Z" level=info msg="Container to stop \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:07:05.229524 containerd[1481]: time="2025-11-06T23:07:05.229425463Z" level=info msg="Container to stop \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:07:05.229524 containerd[1481]: time="2025-11-06T23:07:05.229460943Z" level=info msg="Container to stop \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:07:05.229524 containerd[1481]: time="2025-11-06T23:07:05.229471703Z" level=info msg="Container to stop \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:07:05.229524 containerd[1481]: 
time="2025-11-06T23:07:05.229480423Z" level=info msg="Container to stop \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Nov 6 23:07:05.231349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d-shm.mount: Deactivated successfully. Nov 6 23:07:05.236761 systemd[1]: cri-containerd-a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d.scope: Deactivated successfully. Nov 6 23:07:05.239257 containerd[1481]: time="2025-11-06T23:07:05.239200121Z" level=info msg="shim disconnected" id=765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d namespace=k8s.io Nov 6 23:07:05.239257 containerd[1481]: time="2025-11-06T23:07:05.239252000Z" level=warning msg="cleaning up after shim disconnected" id=765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d namespace=k8s.io Nov 6 23:07:05.239376 containerd[1481]: time="2025-11-06T23:07:05.239261040Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:07:05.252797 containerd[1481]: time="2025-11-06T23:07:05.252377462Z" level=info msg="TearDown network for sandbox \"765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d\" successfully" Nov 6 23:07:05.252797 containerd[1481]: time="2025-11-06T23:07:05.252405662Z" level=info msg="StopPodSandbox for \"765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d\" returns successfully" Nov 6 23:07:05.266916 containerd[1481]: time="2025-11-06T23:07:05.266687231Z" level=info msg="shim disconnected" id=a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d namespace=k8s.io Nov 6 23:07:05.266916 containerd[1481]: time="2025-11-06T23:07:05.266742791Z" level=warning msg="cleaning up after shim disconnected" id=a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d namespace=k8s.io Nov 6 23:07:05.266916 containerd[1481]: 
time="2025-11-06T23:07:05.266752991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 6 23:07:05.278072 containerd[1481]: time="2025-11-06T23:07:05.277955433Z" level=info msg="TearDown network for sandbox \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" successfully" Nov 6 23:07:05.278072 containerd[1481]: time="2025-11-06T23:07:05.277988472Z" level=info msg="StopPodSandbox for \"a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d\" returns successfully" Nov 6 23:07:05.389503 kubelet[2585]: I1106 23:07:05.389476 2585 scope.go:117] "RemoveContainer" containerID="def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579" Nov 6 23:07:05.391179 containerd[1481]: time="2025-11-06T23:07:05.391132441Z" level=info msg="RemoveContainer for \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\"" Nov 6 23:07:05.396504 containerd[1481]: time="2025-11-06T23:07:05.396457665Z" level=info msg="RemoveContainer for \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\" returns successfully" Nov 6 23:07:05.396807 kubelet[2585]: I1106 23:07:05.396786 2585 scope.go:117] "RemoveContainer" containerID="def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579" Nov 6 23:07:05.396986 containerd[1481]: time="2025-11-06T23:07:05.396954620Z" level=error msg="ContainerStatus for \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\": not found" Nov 6 23:07:05.407049 kubelet[2585]: E1106 23:07:05.407002 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\": not found" containerID="def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579" Nov 6 23:07:05.411121 
kubelet[2585]: I1106 23:07:05.411007 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579"} err="failed to get container status \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\": rpc error: code = NotFound desc = an error occurred when try to find container \"def7cfdd4f26ec0c42e2fba6ca5cfdf21c745114df12186ecb7697603d85a579\": not found" Nov 6 23:07:05.411121 kubelet[2585]: I1106 23:07:05.411130 2585 scope.go:117] "RemoveContainer" containerID="6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653" Nov 6 23:07:05.412543 containerd[1481]: time="2025-11-06T23:07:05.412514096Z" level=info msg="RemoveContainer for \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\"" Nov 6 23:07:05.415257 containerd[1481]: time="2025-11-06T23:07:05.415230988Z" level=info msg="RemoveContainer for \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\" returns successfully" Nov 6 23:07:05.415449 kubelet[2585]: I1106 23:07:05.415416 2585 scope.go:117] "RemoveContainer" containerID="cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6" Nov 6 23:07:05.416509 containerd[1481]: time="2025-11-06T23:07:05.416294297Z" level=info msg="RemoveContainer for \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\"" Nov 6 23:07:05.418738 containerd[1481]: time="2025-11-06T23:07:05.418707071Z" level=info msg="RemoveContainer for \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\" returns successfully" Nov 6 23:07:05.419027 kubelet[2585]: I1106 23:07:05.419002 2585 scope.go:117] "RemoveContainer" containerID="7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c" Nov 6 23:07:05.420094 containerd[1481]: time="2025-11-06T23:07:05.419998138Z" level=info msg="RemoveContainer for \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\"" Nov 6 23:07:05.422682 
containerd[1481]: time="2025-11-06T23:07:05.422584910Z" level=info msg="RemoveContainer for \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\" returns successfully" Nov 6 23:07:05.422752 kubelet[2585]: I1106 23:07:05.422726 2585 scope.go:117] "RemoveContainer" containerID="aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d" Nov 6 23:07:05.423913 containerd[1481]: time="2025-11-06T23:07:05.423669979Z" level=info msg="RemoveContainer for \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\"" Nov 6 23:07:05.426139 kubelet[2585]: I1106 23:07:05.426106 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-lib-modules\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426139 kubelet[2585]: I1106 23:07:05.426134 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-etc-cni-netd\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426237 kubelet[2585]: I1106 23:07:05.426151 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-cgroup\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426237 kubelet[2585]: I1106 23:07:05.426169 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-bpf-maps\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426237 kubelet[2585]: I1106 23:07:05.426189 2585 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2411e4ec-6a37-41db-b6af-b79746a6273c-cilium-config-path\") pod \"2411e4ec-6a37-41db-b6af-b79746a6273c\" (UID: \"2411e4ec-6a37-41db-b6af-b79746a6273c\") " Nov 6 23:07:05.426237 kubelet[2585]: I1106 23:07:05.426207 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-run\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426237 kubelet[2585]: I1106 23:07:05.426225 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-config-path\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426356 kubelet[2585]: I1106 23:07:05.426242 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-net\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426356 kubelet[2585]: I1106 23:07:05.426266 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4v9nl\" (UniqueName: \"kubernetes.io/projected/2411e4ec-6a37-41db-b6af-b79746a6273c-kube-api-access-4v9nl\") pod \"2411e4ec-6a37-41db-b6af-b79746a6273c\" (UID: \"2411e4ec-6a37-41db-b6af-b79746a6273c\") " Nov 6 23:07:05.426356 kubelet[2585]: I1106 23:07:05.426285 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cni-path\") pod 
\"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426356 kubelet[2585]: I1106 23:07:05.426303 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hubble-tls\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426356 kubelet[2585]: I1106 23:07:05.426321 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p7d5f\" (UniqueName: \"kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-kube-api-access-p7d5f\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426356 kubelet[2585]: I1106 23:07:05.426338 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-kernel\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426634 kubelet[2585]: I1106 23:07:05.426356 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3647c86-cc77-468f-8b6a-a2c2b794bf85-clustermesh-secrets\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426634 kubelet[2585]: I1106 23:07:05.426371 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-xtables-lock\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.426634 kubelet[2585]: I1106 23:07:05.426386 2585 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hostproc\") pod \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\" (UID: \"e3647c86-cc77-468f-8b6a-a2c2b794bf85\") " Nov 6 23:07:05.428758 kubelet[2585]: I1106 23:07:05.428496 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.428758 kubelet[2585]: I1106 23:07:05.428498 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.428758 kubelet[2585]: I1106 23:07:05.428531 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.428758 kubelet[2585]: I1106 23:07:05.428558 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.428758 kubelet[2585]: I1106 23:07:05.428573 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.428922 kubelet[2585]: I1106 23:07:05.428739 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hostproc" (OuterVolumeSpecName: "hostproc") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.428922 kubelet[2585]: I1106 23:07:05.428795 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.434651 kubelet[2585]: I1106 23:07:05.434358 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:07:05.434651 kubelet[2585]: I1106 23:07:05.428500 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.434651 kubelet[2585]: I1106 23:07:05.434442 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cni-path" (OuterVolumeSpecName: "cni-path") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.438902 kubelet[2585]: I1106 23:07:05.438356 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Nov 6 23:07:05.438902 kubelet[2585]: I1106 23:07:05.438466 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3647c86-cc77-468f-8b6a-a2c2b794bf85-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Nov 6 23:07:05.441203 kubelet[2585]: I1106 23:07:05.440216 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-kube-api-access-p7d5f" (OuterVolumeSpecName: "kube-api-access-p7d5f") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "kube-api-access-p7d5f". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:07:05.445642 kubelet[2585]: I1106 23:07:05.445601 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2411e4ec-6a37-41db-b6af-b79746a6273c-kube-api-access-4v9nl" (OuterVolumeSpecName: "kube-api-access-4v9nl") pod "2411e4ec-6a37-41db-b6af-b79746a6273c" (UID: "2411e4ec-6a37-41db-b6af-b79746a6273c"). InnerVolumeSpecName "kube-api-access-4v9nl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:07:05.445820 kubelet[2585]: I1106 23:07:05.445799 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e3647c86-cc77-468f-8b6a-a2c2b794bf85" (UID: "e3647c86-cc77-468f-8b6a-a2c2b794bf85"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Nov 6 23:07:05.445857 containerd[1481]: time="2025-11-06T23:07:05.445830026Z" level=info msg="RemoveContainer for \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\" returns successfully" Nov 6 23:07:05.446082 kubelet[2585]: I1106 23:07:05.446058 2585 scope.go:117] "RemoveContainer" containerID="0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd" Nov 6 23:07:05.448227 containerd[1481]: time="2025-11-06T23:07:05.448194921Z" level=info msg="RemoveContainer for \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\"" Nov 6 23:07:05.451476 kubelet[2585]: I1106 23:07:05.451437 2585 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2411e4ec-6a37-41db-b6af-b79746a6273c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2411e4ec-6a37-41db-b6af-b79746a6273c" (UID: "2411e4ec-6a37-41db-b6af-b79746a6273c"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Nov 6 23:07:05.453043 containerd[1481]: time="2025-11-06T23:07:05.452998470Z" level=info msg="RemoveContainer for \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\" returns successfully" Nov 6 23:07:05.453335 kubelet[2585]: I1106 23:07:05.453303 2585 scope.go:117] "RemoveContainer" containerID="6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653" Nov 6 23:07:05.453554 containerd[1481]: time="2025-11-06T23:07:05.453522265Z" level=error msg="ContainerStatus for \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\": not found" Nov 6 23:07:05.453833 kubelet[2585]: E1106 23:07:05.453685 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\": not found" containerID="6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653" Nov 6 23:07:05.453833 kubelet[2585]: I1106 23:07:05.453718 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653"} err="failed to get container status \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d7e0ec96e556ec8bb5136fe9ef76f2a57762ca64b39cc59c85a9636a68a2653\": not found" Nov 6 23:07:05.453833 kubelet[2585]: I1106 23:07:05.453740 2585 scope.go:117] "RemoveContainer" containerID="cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6" Nov 6 23:07:05.453947 containerd[1481]: time="2025-11-06T23:07:05.453923821Z" level=error msg="ContainerStatus for 
\"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\": not found" Nov 6 23:07:05.454073 kubelet[2585]: E1106 23:07:05.454039 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\": not found" containerID="cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6" Nov 6 23:07:05.454120 kubelet[2585]: I1106 23:07:05.454080 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6"} err="failed to get container status \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"cce92b55593a49de936eb5aac88f93961fe80633e62f9fa2681ead7969ea50c6\": not found" Nov 6 23:07:05.454120 kubelet[2585]: I1106 23:07:05.454109 2585 scope.go:117] "RemoveContainer" containerID="7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c" Nov 6 23:07:05.454275 containerd[1481]: time="2025-11-06T23:07:05.454244617Z" level=error msg="ContainerStatus for \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\": not found" Nov 6 23:07:05.454357 kubelet[2585]: E1106 23:07:05.454340 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\": not found" 
containerID="7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c" Nov 6 23:07:05.454410 kubelet[2585]: I1106 23:07:05.454362 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c"} err="failed to get container status \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\": rpc error: code = NotFound desc = an error occurred when try to find container \"7dbd9daa986cd2edf78ca8937d2ed8d6f0c496d30e3fea581f5029c4289a3b2c\": not found" Nov 6 23:07:05.454484 kubelet[2585]: I1106 23:07:05.454410 2585 scope.go:117] "RemoveContainer" containerID="aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d" Nov 6 23:07:05.454570 containerd[1481]: time="2025-11-06T23:07:05.454541494Z" level=error msg="ContainerStatus for \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\": not found" Nov 6 23:07:05.454648 kubelet[2585]: E1106 23:07:05.454630 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\": not found" containerID="aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d" Nov 6 23:07:05.454691 kubelet[2585]: I1106 23:07:05.454651 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d"} err="failed to get container status \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\": rpc error: code = NotFound desc = an error occurred when try to find container \"aaa358698fcdd347e5a9c31b15929a91dd63bc5305a61fd9a48ad7606c68814d\": not found" Nov 6 23:07:05.454691 
kubelet[2585]: I1106 23:07:05.454667 2585 scope.go:117] "RemoveContainer" containerID="0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd"
Nov 6 23:07:05.454822 containerd[1481]: time="2025-11-06T23:07:05.454796691Z" level=error msg="ContainerStatus for \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\": not found"
Nov 6 23:07:05.454895 kubelet[2585]: E1106 23:07:05.454880 2585 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\": not found" containerID="0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd"
Nov 6 23:07:05.454940 kubelet[2585]: I1106 23:07:05.454897 2585 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd"} err="failed to get container status \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"0c7d11c016390e5d91e995343306dd3a0734aa7b10e2fe4ab0b6e0671638bdcd\": not found"
Nov 6 23:07:05.527268 kubelet[2585]: I1106 23:07:05.527216 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527268 kubelet[2585]: I1106 23:07:05.527248 2585 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527268 kubelet[2585]: I1106 23:07:05.527264 2585 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cni-path\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527268 kubelet[2585]: I1106 23:07:05.527274 2585 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hubble-tls\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527268 kubelet[2585]: I1106 23:07:05.527283 2585 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-p7d5f\" (UniqueName: \"kubernetes.io/projected/e3647c86-cc77-468f-8b6a-a2c2b794bf85-kube-api-access-p7d5f\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527291 2585 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527300 2585 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4v9nl\" (UniqueName: \"kubernetes.io/projected/2411e4ec-6a37-41db-b6af-b79746a6273c-kube-api-access-4v9nl\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527307 2585 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-xtables-lock\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527315 2585 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-hostproc\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527322 2585 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3647c86-cc77-468f-8b6a-a2c2b794bf85-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527329 2585 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-lib-modules\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527336 2585 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527489 kubelet[2585]: I1106 23:07:05.527344 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527640 kubelet[2585]: I1106 23:07:05.527351 2585 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-bpf-maps\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527640 kubelet[2585]: I1106 23:07:05.527359 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2411e4ec-6a37-41db-b6af-b79746a6273c-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.527640 kubelet[2585]: I1106 23:07:05.527368 2585 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3647c86-cc77-468f-8b6a-a2c2b794bf85-cilium-run\") on node \"localhost\" DevicePath \"\""
Nov 6 23:07:05.694051 systemd[1]: Removed slice kubepods-besteffort-pod2411e4ec_6a37_41db_b6af_b79746a6273c.slice - libcontainer container kubepods-besteffort-pod2411e4ec_6a37_41db_b6af_b79746a6273c.slice.
Nov 6 23:07:05.700176 systemd[1]: Removed slice kubepods-burstable-pode3647c86_cc77_468f_8b6a_a2c2b794bf85.slice - libcontainer container kubepods-burstable-pode3647c86_cc77_468f_8b6a_a2c2b794bf85.slice.
Nov 6 23:07:05.700291 systemd[1]: kubepods-burstable-pode3647c86_cc77_468f_8b6a_a2c2b794bf85.slice: Consumed 6.306s CPU time, 126.5M memory peak, 164K read from disk, 12.9M written to disk.
Nov 6 23:07:06.118934 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a910e26eae4dbb6c3e9a0cdbc682d5c2334ab382bbcf50706dc4b03f3b58951d-rootfs.mount: Deactivated successfully.
Nov 6 23:07:06.119038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-765e50336204eb586c43f2ae6880425fc2b490f1d452c1a51c57f18457f7321d-rootfs.mount: Deactivated successfully.
Nov 6 23:07:06.119088 systemd[1]: var-lib-kubelet-pods-e3647c86\x2dcc77\x2d468f\x2d8b6a\x2da2c2b794bf85-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp7d5f.mount: Deactivated successfully.
Nov 6 23:07:06.119144 systemd[1]: var-lib-kubelet-pods-2411e4ec\x2d6a37\x2d41db\x2db6af\x2db79746a6273c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4v9nl.mount: Deactivated successfully.
Nov 6 23:07:06.119201 systemd[1]: var-lib-kubelet-pods-e3647c86\x2dcc77\x2d468f\x2d8b6a\x2da2c2b794bf85-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Nov 6 23:07:06.119259 systemd[1]: var-lib-kubelet-pods-e3647c86\x2dcc77\x2d468f\x2d8b6a\x2da2c2b794bf85-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Nov 6 23:07:07.040762 sshd[4271]: Connection closed by 10.0.0.1 port 45170
Nov 6 23:07:07.042141 sshd-session[4268]: pam_unix(sshd:session): session closed for user core
Nov 6 23:07:07.049620 systemd[1]: sshd@23-10.0.0.7:22-10.0.0.1:45170.service: Deactivated successfully.
Nov 6 23:07:07.051555 systemd[1]: session-24.scope: Deactivated successfully.
Nov 6 23:07:07.051858 systemd[1]: session-24.scope: Consumed 1.727s CPU time, 28.5M memory peak.
Nov 6 23:07:07.052366 systemd-logind[1468]: Session 24 logged out. Waiting for processes to exit.
Nov 6 23:07:07.058114 systemd[1]: Started sshd@24-10.0.0.7:22-10.0.0.1:45182.service - OpenSSH per-connection server daemon (10.0.0.1:45182).
Nov 6 23:07:07.059294 systemd-logind[1468]: Removed session 24.
Nov 6 23:07:07.101430 sshd[4431]: Accepted publickey for core from 10.0.0.1 port 45182 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ
Nov 6 23:07:07.102872 sshd-session[4431]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:07:07.107145 systemd-logind[1468]: New session 25 of user core.
Nov 6 23:07:07.113998 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 6 23:07:07.186534 kubelet[2585]: I1106 23:07:07.186485 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2411e4ec-6a37-41db-b6af-b79746a6273c" path="/var/lib/kubelet/pods/2411e4ec-6a37-41db-b6af-b79746a6273c/volumes"
Nov 6 23:07:07.187793 kubelet[2585]: I1106 23:07:07.186944 2585 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3647c86-cc77-468f-8b6a-a2c2b794bf85" path="/var/lib/kubelet/pods/e3647c86-cc77-468f-8b6a-a2c2b794bf85/volumes"
Nov 6 23:07:07.260929 kubelet[2585]: E1106 23:07:07.260886 2585 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 6 23:07:07.997374 sshd[4434]: Connection closed by 10.0.0.1 port 45182
Nov 6 23:07:07.996561 sshd-session[4431]: pam_unix(sshd:session): session closed for user core
Nov 6 23:07:08.008050 systemd[1]: session-25.scope: Deactivated successfully.
Nov 6 23:07:08.009419 systemd[1]: sshd@24-10.0.0.7:22-10.0.0.1:45182.service: Deactivated successfully.
Nov 6 23:07:08.016841 systemd-logind[1468]: Session 25 logged out. Waiting for processes to exit.
Nov 6 23:07:08.019003 systemd-logind[1468]: Removed session 25.
Nov 6 23:07:08.026503 kubelet[2585]: I1106 23:07:08.026462 2585 memory_manager.go:355] "RemoveStaleState removing state" podUID="2411e4ec-6a37-41db-b6af-b79746a6273c" containerName="cilium-operator"
Nov 6 23:07:08.026503 kubelet[2585]: I1106 23:07:08.026496 2585 memory_manager.go:355] "RemoveStaleState removing state" podUID="e3647c86-cc77-468f-8b6a-a2c2b794bf85" containerName="cilium-agent"
Nov 6 23:07:08.032112 systemd[1]: Started sshd@25-10.0.0.7:22-10.0.0.1:45184.service - OpenSSH per-connection server daemon (10.0.0.1:45184).
Nov 6 23:07:08.040543 kubelet[2585]: I1106 23:07:08.040502 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-etc-cni-netd\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040543 kubelet[2585]: I1106 23:07:08.040539 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-cni-path\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040670 kubelet[2585]: I1106 23:07:08.040560 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9pc\" (UniqueName: \"kubernetes.io/projected/711e21cd-a033-4c4b-925d-198ccfee7a83-kube-api-access-xv9pc\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040670 kubelet[2585]: I1106 23:07:08.040584 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-host-proc-sys-net\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040670 kubelet[2585]: I1106 23:07:08.040599 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-host-proc-sys-kernel\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040670 kubelet[2585]: I1106 23:07:08.040615 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-hostproc\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040670 kubelet[2585]: I1106 23:07:08.040630 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/711e21cd-a033-4c4b-925d-198ccfee7a83-cilium-config-path\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040797 kubelet[2585]: I1106 23:07:08.040645 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-cilium-run\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040797 kubelet[2585]: I1106 23:07:08.040659 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-lib-modules\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040797 kubelet[2585]: I1106 23:07:08.040673 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/711e21cd-a033-4c4b-925d-198ccfee7a83-clustermesh-secrets\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040797 kubelet[2585]: I1106 23:07:08.040689 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/711e21cd-a033-4c4b-925d-198ccfee7a83-hubble-tls\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040797 kubelet[2585]: I1106 23:07:08.040707 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-bpf-maps\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040797 kubelet[2585]: I1106 23:07:08.040722 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-xtables-lock\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040924 kubelet[2585]: I1106 23:07:08.040738 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/711e21cd-a033-4c4b-925d-198ccfee7a83-cilium-cgroup\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.040924 kubelet[2585]: I1106 23:07:08.040753 2585 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/711e21cd-a033-4c4b-925d-198ccfee7a83-cilium-ipsec-secrets\") pod \"cilium-646pr\" (UID: \"711e21cd-a033-4c4b-925d-198ccfee7a83\") " pod="kube-system/cilium-646pr"
Nov 6 23:07:08.046275 systemd[1]: Created slice kubepods-burstable-pod711e21cd_a033_4c4b_925d_198ccfee7a83.slice - libcontainer container kubepods-burstable-pod711e21cd_a033_4c4b_925d_198ccfee7a83.slice.
Nov 6 23:07:08.078106 sshd[4446]: Accepted publickey for core from 10.0.0.1 port 45184 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ
Nov 6 23:07:08.079410 sshd-session[4446]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:07:08.083901 systemd-logind[1468]: New session 26 of user core.
Nov 6 23:07:08.095971 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 6 23:07:08.149815 sshd[4448]: Connection closed by 10.0.0.1 port 45184
Nov 6 23:07:08.151259 sshd-session[4446]: pam_unix(sshd:session): session closed for user core
Nov 6 23:07:08.157616 systemd[1]: sshd@25-10.0.0.7:22-10.0.0.1:45184.service: Deactivated successfully.
Nov 6 23:07:08.160403 systemd[1]: session-26.scope: Deactivated successfully.
Nov 6 23:07:08.171976 systemd-logind[1468]: Session 26 logged out. Waiting for processes to exit.
Nov 6 23:07:08.173652 systemd[1]: Started sshd@26-10.0.0.7:22-10.0.0.1:45196.service - OpenSSH per-connection server daemon (10.0.0.1:45196).
Nov 6 23:07:08.174340 systemd-logind[1468]: Removed session 26.
Nov 6 23:07:08.211501 sshd[4458]: Accepted publickey for core from 10.0.0.1 port 45196 ssh2: RSA SHA256:Ikhad6xsaZAyuqvAZruy0J7oy2IqTD7/hle70OigXJQ
Nov 6 23:07:08.212585 sshd-session[4458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 6 23:07:08.216499 systemd-logind[1468]: New session 27 of user core.
Nov 6 23:07:08.236965 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 6 23:07:08.348613 kubelet[2585]: E1106 23:07:08.348490 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:08.349362 containerd[1481]: time="2025-11-06T23:07:08.349324534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-646pr,Uid:711e21cd-a033-4c4b-925d-198ccfee7a83,Namespace:kube-system,Attempt:0,}"
Nov 6 23:07:08.367612 containerd[1481]: time="2025-11-06T23:07:08.367521234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 6 23:07:08.367612 containerd[1481]: time="2025-11-06T23:07:08.367573314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 6 23:07:08.367612 containerd[1481]: time="2025-11-06T23:07:08.367591034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:07:08.367828 containerd[1481]: time="2025-11-06T23:07:08.367663033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 6 23:07:08.388359 systemd[1]: Started cri-containerd-5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158.scope - libcontainer container 5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158.
Nov 6 23:07:08.407646 containerd[1481]: time="2025-11-06T23:07:08.407609118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-646pr,Uid:711e21cd-a033-4c4b-925d-198ccfee7a83,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\""
Nov 6 23:07:08.408471 kubelet[2585]: E1106 23:07:08.408449 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:08.411448 containerd[1481]: time="2025-11-06T23:07:08.411321521Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Nov 6 23:07:08.432456 containerd[1481]: time="2025-11-06T23:07:08.432378433Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2\""
Nov 6 23:07:08.433350 containerd[1481]: time="2025-11-06T23:07:08.432929708Z" level=info msg="StartContainer for \"be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2\""
Nov 6 23:07:08.457950 systemd[1]: Started cri-containerd-be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2.scope - libcontainer container be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2.
Nov 6 23:07:08.480747 containerd[1481]: time="2025-11-06T23:07:08.480691035Z" level=info msg="StartContainer for \"be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2\" returns successfully"
Nov 6 23:07:08.489289 systemd[1]: cri-containerd-be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2.scope: Deactivated successfully.
Nov 6 23:07:08.519395 containerd[1481]: time="2025-11-06T23:07:08.519104655Z" level=info msg="shim disconnected" id=be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2 namespace=k8s.io
Nov 6 23:07:08.519395 containerd[1481]: time="2025-11-06T23:07:08.519231574Z" level=warning msg="cleaning up after shim disconnected" id=be23e4229174a7788f02d40fc468cce11e7c8fac9b36430b494cd875878fdff2 namespace=k8s.io
Nov 6 23:07:08.519395 containerd[1481]: time="2025-11-06T23:07:08.519242294Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:07:09.100225 kubelet[2585]: I1106 23:07:09.100165 2585 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-11-06T23:07:09Z","lastTransitionTime":"2025-11-06T23:07:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Nov 6 23:07:09.183658 kubelet[2585]: E1106 23:07:09.183608 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:09.403066 kubelet[2585]: E1106 23:07:09.403027 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:09.405419 containerd[1481]: time="2025-11-06T23:07:09.405375090Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Nov 6 23:07:09.422438 containerd[1481]: time="2025-11-06T23:07:09.422059048Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41\""
Nov 6 23:07:09.423296 containerd[1481]: time="2025-11-06T23:07:09.423249757Z" level=info msg="StartContainer for \"718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41\""
Nov 6 23:07:09.464984 systemd[1]: Started cri-containerd-718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41.scope - libcontainer container 718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41.
Nov 6 23:07:09.486606 containerd[1481]: time="2025-11-06T23:07:09.486562623Z" level=info msg="StartContainer for \"718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41\" returns successfully"
Nov 6 23:07:09.491563 systemd[1]: cri-containerd-718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41.scope: Deactivated successfully.
Nov 6 23:07:09.513228 containerd[1481]: time="2025-11-06T23:07:09.513159485Z" level=info msg="shim disconnected" id=718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41 namespace=k8s.io
Nov 6 23:07:09.513577 containerd[1481]: time="2025-11-06T23:07:09.513417363Z" level=warning msg="cleaning up after shim disconnected" id=718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41 namespace=k8s.io
Nov 6 23:07:09.513577 containerd[1481]: time="2025-11-06T23:07:09.513436802Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:07:10.148810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-718dad296a3e65b6caef708a5184a6487aa3b2b46d4aad22dee769c5d9c39a41-rootfs.mount: Deactivated successfully.
Nov 6 23:07:10.407443 kubelet[2585]: E1106 23:07:10.407292 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:10.410572 containerd[1481]: time="2025-11-06T23:07:10.410532987Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Nov 6 23:07:10.427754 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1766568638.mount: Deactivated successfully.
Nov 6 23:07:10.429894 containerd[1481]: time="2025-11-06T23:07:10.429854923Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d\""
Nov 6 23:07:10.430464 containerd[1481]: time="2025-11-06T23:07:10.430435598Z" level=info msg="StartContainer for \"f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d\""
Nov 6 23:07:10.458940 systemd[1]: Started cri-containerd-f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d.scope - libcontainer container f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d.
Nov 6 23:07:10.484216 containerd[1481]: time="2025-11-06T23:07:10.484164167Z" level=info msg="StartContainer for \"f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d\" returns successfully"
Nov 6 23:07:10.484281 systemd[1]: cri-containerd-f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d.scope: Deactivated successfully.
Nov 6 23:07:10.512260 containerd[1481]: time="2025-11-06T23:07:10.512200701Z" level=info msg="shim disconnected" id=f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d namespace=k8s.io
Nov 6 23:07:10.512260 containerd[1481]: time="2025-11-06T23:07:10.512256300Z" level=warning msg="cleaning up after shim disconnected" id=f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d namespace=k8s.io
Nov 6 23:07:10.512260 containerd[1481]: time="2025-11-06T23:07:10.512265540Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:07:11.148902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8822b8f4d590bd33414ca94ef442ba1dc9e9e17402d023391d98f034da6da6d-rootfs.mount: Deactivated successfully.
Nov 6 23:07:11.410493 kubelet[2585]: E1106 23:07:11.410326 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:11.413703 containerd[1481]: time="2025-11-06T23:07:11.413660295Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Nov 6 23:07:11.432161 containerd[1481]: time="2025-11-06T23:07:11.432101243Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142\""
Nov 6 23:07:11.433646 containerd[1481]: time="2025-11-06T23:07:11.432715917Z" level=info msg="StartContainer for \"fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142\""
Nov 6 23:07:11.462984 systemd[1]: Started cri-containerd-fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142.scope - libcontainer container fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142.
Nov 6 23:07:11.483561 systemd[1]: cri-containerd-fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142.scope: Deactivated successfully.
Nov 6 23:07:11.487557 containerd[1481]: time="2025-11-06T23:07:11.487501327Z" level=info msg="StartContainer for \"fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142\" returns successfully"
Nov 6 23:07:11.506195 containerd[1481]: time="2025-11-06T23:07:11.506129714Z" level=info msg="shim disconnected" id=fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142 namespace=k8s.io
Nov 6 23:07:11.506195 containerd[1481]: time="2025-11-06T23:07:11.506191313Z" level=warning msg="cleaning up after shim disconnected" id=fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142 namespace=k8s.io
Nov 6 23:07:11.506195 containerd[1481]: time="2025-11-06T23:07:11.506201313Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 6 23:07:12.148933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb6346e88f5a22f3c2fd658f373628d4f4ef464d6fb4f07d7b6c219144047142-rootfs.mount: Deactivated successfully.
Nov 6 23:07:12.261674 kubelet[2585]: E1106 23:07:12.261616 2585 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Nov 6 23:07:12.417145 kubelet[2585]: E1106 23:07:12.415594 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:12.419750 containerd[1481]: time="2025-11-06T23:07:12.419652481Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Nov 6 23:07:12.433986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount918903039.mount: Deactivated successfully.
Nov 6 23:07:12.437071 containerd[1481]: time="2025-11-06T23:07:12.436938043Z" level=info msg="CreateContainer within sandbox \"5a5c4dc94886181457241adc66134b8bd5dcc44906554ebe0704a19d3b8b6158\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ed58bbd7b5bd04c0698936d8e2ac43f2e62abc0ffca1305a26cd9d7c882b7484\""
Nov 6 23:07:12.439556 containerd[1481]: time="2025-11-06T23:07:12.439522100Z" level=info msg="StartContainer for \"ed58bbd7b5bd04c0698936d8e2ac43f2e62abc0ffca1305a26cd9d7c882b7484\""
Nov 6 23:07:12.469936 systemd[1]: Started cri-containerd-ed58bbd7b5bd04c0698936d8e2ac43f2e62abc0ffca1305a26cd9d7c882b7484.scope - libcontainer container ed58bbd7b5bd04c0698936d8e2ac43f2e62abc0ffca1305a26cd9d7c882b7484.
Nov 6 23:07:12.496678 containerd[1481]: time="2025-11-06T23:07:12.496633618Z" level=info msg="StartContainer for \"ed58bbd7b5bd04c0698936d8e2ac43f2e62abc0ffca1305a26cd9d7c882b7484\" returns successfully"
Nov 6 23:07:12.743789 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Nov 6 23:07:13.419670 kubelet[2585]: E1106 23:07:13.419623 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:14.420350 kubelet[2585]: E1106 23:07:14.420321 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:15.589002 systemd-networkd[1401]: lxc_health: Link UP
Nov 6 23:07:15.597252 systemd-networkd[1401]: lxc_health: Gained carrier
Nov 6 23:07:16.350652 kubelet[2585]: E1106 23:07:16.350603 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:16.423872 kubelet[2585]: E1106 23:07:16.423789 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:16.427641 kubelet[2585]: I1106 23:07:16.427250 2585 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-646pr" podStartSLOduration=8.427234555 podStartE2EDuration="8.427234555s" podCreationTimestamp="2025-11-06 23:07:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-11-06 23:07:13.434538368 +0000 UTC m=+86.326420664" watchObservedRunningTime="2025-11-06 23:07:16.427234555 +0000 UTC m=+89.319116731"
Nov 6 23:07:17.187040 kubelet[2585]: E1106 23:07:17.186631 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:17.426032 kubelet[2585]: E1106 23:07:17.426005 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:17.426930 systemd-networkd[1401]: lxc_health: Gained IPv6LL
Nov 6 23:07:18.776182 systemd[1]: run-containerd-runc-k8s.io-ed58bbd7b5bd04c0698936d8e2ac43f2e62abc0ffca1305a26cd9d7c882b7484-runc.Z6GIF0.mount: Deactivated successfully.
Nov 6 23:07:20.964995 sshd[4461]: Connection closed by 10.0.0.1 port 45196
Nov 6 23:07:20.965793 sshd-session[4458]: pam_unix(sshd:session): session closed for user core
Nov 6 23:07:20.968587 systemd[1]: sshd@26-10.0.0.7:22-10.0.0.1:45196.service: Deactivated successfully.
Nov 6 23:07:20.970692 systemd[1]: session-27.scope: Deactivated successfully.
Nov 6 23:07:20.971994 systemd-logind[1468]: Session 27 logged out. Waiting for processes to exit.
Nov 6 23:07:20.973110 systemd-logind[1468]: Removed session 27.
Nov 6 23:07:21.184530 kubelet[2585]: E1106 23:07:21.184108 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Nov 6 23:07:22.184146 kubelet[2585]: E1106 23:07:22.184114 2585 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"