Jul 12 00:20:08.962460 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 12 00:20:08.962483 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jul 11 22:42:11 -00 2025
Jul 12 00:20:08.962494 kernel: KASLR enabled
Jul 12 00:20:08.962500 kernel: efi: EFI v2.7 by EDK II
Jul 12 00:20:08.962506 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdba86018 ACPI 2.0=0xd9710018 RNG=0xd971e498 MEMRESERVE=0xd9b43d18
Jul 12 00:20:08.962512 kernel: random: crng init done
Jul 12 00:20:08.962519 kernel: ACPI: Early table checksum verification disabled
Jul 12 00:20:08.962525 kernel: ACPI: RSDP 0x00000000D9710018 000024 (v02 BOCHS )
Jul 12 00:20:08.962531 kernel: ACPI: XSDT 0x00000000D971FE98 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 12 00:20:08.962540 kernel: ACPI: FACP 0x00000000D971FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962546 kernel: ACPI: DSDT 0x00000000D9717518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962553 kernel: ACPI: APIC 0x00000000D971FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962559 kernel: ACPI: PPTT 0x00000000D971D898 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962565 kernel: ACPI: GTDT 0x00000000D971E818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962572 kernel: ACPI: MCFG 0x00000000D971E918 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962580 kernel: ACPI: SPCR 0x00000000D971FF98 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962587 kernel: ACPI: DBG2 0x00000000D971E418 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962594 kernel: ACPI: IORT 0x00000000D971E718 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 12 00:20:08.962600 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 12 00:20:08.962607 kernel: NUMA: Failed to initialise from firmware
Jul 12 00:20:08.962613 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:20:08.962620 kernel: NUMA: NODE_DATA [mem 0xdc957800-0xdc95cfff]
Jul 12 00:20:08.962626 kernel: Zone ranges:
Jul 12 00:20:08.962633 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:20:08.962639 kernel: DMA32 empty
Jul 12 00:20:08.962648 kernel: Normal empty
Jul 12 00:20:08.962654 kernel: Movable zone start for each node
Jul 12 00:20:08.962684 kernel: Early memory node ranges
Jul 12 00:20:08.962731 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
Jul 12 00:20:08.962739 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 12 00:20:08.962745 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 12 00:20:08.962752 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 12 00:20:08.962758 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 12 00:20:08.962765 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 12 00:20:08.962771 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 12 00:20:08.962781 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 12 00:20:08.962788 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 12 00:20:08.962797 kernel: psci: probing for conduit method from ACPI.
Jul 12 00:20:08.962804 kernel: psci: PSCIv1.1 detected in firmware.
Jul 12 00:20:08.962811 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 12 00:20:08.962820 kernel: psci: Trusted OS migration not required
Jul 12 00:20:08.962827 kernel: psci: SMC Calling Convention v1.1
Jul 12 00:20:08.962836 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 12 00:20:08.962844 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 12 00:20:08.962853 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 12 00:20:08.962862 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 12 00:20:08.962869 kernel: Detected PIPT I-cache on CPU0
Jul 12 00:20:08.962876 kernel: CPU features: detected: GIC system register CPU interface
Jul 12 00:20:08.962883 kernel: CPU features: detected: Hardware dirty bit management
Jul 12 00:20:08.962890 kernel: CPU features: detected: Spectre-v4
Jul 12 00:20:08.962896 kernel: CPU features: detected: Spectre-BHB
Jul 12 00:20:08.962903 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 12 00:20:08.962910 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 12 00:20:08.962918 kernel: CPU features: detected: ARM erratum 1418040
Jul 12 00:20:08.962925 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 12 00:20:08.962932 kernel: alternatives: applying boot alternatives
Jul 12 00:20:08.962969 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:20:08.962980 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 12 00:20:08.962988 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 12 00:20:08.962995 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 12 00:20:08.963002 kernel: Fallback order for Node 0: 0
Jul 12 00:20:08.963009 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 12 00:20:08.963016 kernel: Policy zone: DMA
Jul 12 00:20:08.963023 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 12 00:20:08.963300 kernel: software IO TLB: area num 4.
Jul 12 00:20:08.963316 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 12 00:20:08.963324 kernel: Memory: 2386400K/2572288K available (10304K kernel code, 2186K rwdata, 8108K rodata, 39424K init, 897K bss, 185888K reserved, 0K cma-reserved)
Jul 12 00:20:08.963331 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 12 00:20:08.963338 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 12 00:20:08.963345 kernel: rcu: RCU event tracing is enabled.
Jul 12 00:20:08.963353 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 12 00:20:08.963360 kernel: Trampoline variant of Tasks RCU enabled.
Jul 12 00:20:08.963367 kernel: Tracing variant of Tasks RCU enabled.
Jul 12 00:20:08.963374 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 12 00:20:08.963381 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 12 00:20:08.963388 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 12 00:20:08.963408 kernel: GICv3: 256 SPIs implemented
Jul 12 00:20:08.963415 kernel: GICv3: 0 Extended SPIs implemented
Jul 12 00:20:08.963422 kernel: Root IRQ handler: gic_handle_irq
Jul 12 00:20:08.963429 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 12 00:20:08.963436 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 12 00:20:08.963443 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 12 00:20:08.963450 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 12 00:20:08.963457 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 12 00:20:08.963464 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 12 00:20:08.963471 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 12 00:20:08.963478 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 12 00:20:08.963486 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:08.963493 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 12 00:20:08.963500 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 12 00:20:08.963507 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 12 00:20:08.963514 kernel: arm-pv: using stolen time PV
Jul 12 00:20:08.963521 kernel: Console: colour dummy device 80x25
Jul 12 00:20:08.963529 kernel: ACPI: Core revision 20230628
Jul 12 00:20:08.963536 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 12 00:20:08.963543 kernel: pid_max: default: 32768 minimum: 301
Jul 12 00:20:08.963586 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 12 00:20:08.963597 kernel: landlock: Up and running.
Jul 12 00:20:08.963604 kernel: SELinux: Initializing.
Jul 12 00:20:08.963611 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:20:08.963618 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 12 00:20:08.963625 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:20:08.963632 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 12 00:20:08.963639 kernel: rcu: Hierarchical SRCU implementation.
Jul 12 00:20:08.963646 kernel: rcu: Max phase no-delay instances is 400.
Jul 12 00:20:08.963654 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 12 00:20:08.963694 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 12 00:20:08.963737 kernel: Remapping and enabling EFI services.
Jul 12 00:20:08.963746 kernel: smp: Bringing up secondary CPUs ...
Jul 12 00:20:08.963753 kernel: Detected PIPT I-cache on CPU1
Jul 12 00:20:08.963760 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 12 00:20:08.963767 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 12 00:20:08.963774 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:08.963781 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 12 00:20:08.963788 kernel: Detected PIPT I-cache on CPU2
Jul 12 00:20:08.963795 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 12 00:20:08.963806 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 12 00:20:08.963813 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:08.963825 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 12 00:20:08.963833 kernel: Detected PIPT I-cache on CPU3
Jul 12 00:20:08.963875 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 12 00:20:08.963884 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 12 00:20:08.963892 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 12 00:20:08.963899 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 12 00:20:08.963907 kernel: smp: Brought up 1 node, 4 CPUs
Jul 12 00:20:08.963917 kernel: SMP: Total of 4 processors activated.
Jul 12 00:20:08.963924 kernel: CPU features: detected: 32-bit EL0 Support
Jul 12 00:20:08.963932 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 12 00:20:08.963939 kernel: CPU features: detected: Common not Private translations
Jul 12 00:20:08.963979 kernel: CPU features: detected: CRC32 instructions
Jul 12 00:20:08.963987 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 12 00:20:08.963995 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 12 00:20:08.964002 kernel: CPU features: detected: LSE atomic instructions
Jul 12 00:20:08.964012 kernel: CPU features: detected: Privileged Access Never
Jul 12 00:20:08.964019 kernel: CPU features: detected: RAS Extension Support
Jul 12 00:20:08.964027 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 12 00:20:08.964034 kernel: CPU: All CPU(s) started at EL1
Jul 12 00:20:08.964041 kernel: alternatives: applying system-wide alternatives
Jul 12 00:20:08.964082 kernel: devtmpfs: initialized
Jul 12 00:20:08.964091 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 12 00:20:08.964098 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 12 00:20:08.964106 kernel: pinctrl core: initialized pinctrl subsystem
Jul 12 00:20:08.964117 kernel: SMBIOS 3.0.0 present.
Jul 12 00:20:08.964124 kernel: DMI: QEMU KVM Virtual Machine, BIOS edk2-20230524-3.fc38 05/24/2023
Jul 12 00:20:08.964131 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 12 00:20:08.964139 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 12 00:20:08.964407 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 12 00:20:08.964415 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 12 00:20:08.964423 kernel: audit: initializing netlink subsys (disabled)
Jul 12 00:20:08.964430 kernel: audit: type=2000 audit(0.024:1): state=initialized audit_enabled=0 res=1
Jul 12 00:20:08.964437 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 12 00:20:08.964450 kernel: cpuidle: using governor menu
Jul 12 00:20:08.964457 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 12 00:20:08.964465 kernel: ASID allocator initialised with 32768 entries
Jul 12 00:20:08.964472 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 12 00:20:08.964479 kernel: Serial: AMBA PL011 UART driver
Jul 12 00:20:08.964486 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 12 00:20:08.964494 kernel: Modules: 0 pages in range for non-PLT usage
Jul 12 00:20:08.964501 kernel: Modules: 509008 pages in range for PLT usage
Jul 12 00:20:08.964509 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 12 00:20:08.964518 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 12 00:20:08.964525 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 12 00:20:08.964533 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 12 00:20:08.964540 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 12 00:20:08.964547 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 12 00:20:08.964554 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 12 00:20:08.964575 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 12 00:20:08.964582 kernel: ACPI: Added _OSI(Module Device)
Jul 12 00:20:08.964589 kernel: ACPI: Added _OSI(Processor Device)
Jul 12 00:20:08.964599 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 12 00:20:08.964607 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 12 00:20:08.964614 kernel: ACPI: Interpreter enabled
Jul 12 00:20:08.964622 kernel: ACPI: Using GIC for interrupt routing
Jul 12 00:20:08.964629 kernel: ACPI: MCFG table detected, 1 entries
Jul 12 00:20:08.964637 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 12 00:20:08.964644 kernel: printk: console [ttyAMA0] enabled
Jul 12 00:20:08.964652 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 12 00:20:08.964873 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 12 00:20:08.964960 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 12 00:20:08.965075 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 12 00:20:08.965199 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 12 00:20:08.965265 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 12 00:20:08.965275 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 12 00:20:08.965282 kernel: PCI host bridge to bus 0000:00
Jul 12 00:20:08.965411 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 12 00:20:08.965517 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 12 00:20:08.965578 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 12 00:20:08.966106 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 12 00:20:08.966256 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 12 00:20:08.966346 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 12 00:20:08.966427 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 12 00:20:08.966502 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 12 00:20:08.966625 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:20:08.966721 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 12 00:20:08.966842 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 12 00:20:08.966964 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 12 00:20:08.967034 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 12 00:20:08.967111 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 12 00:20:08.967208 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 12 00:20:08.967219 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 12 00:20:08.967227 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 12 00:20:08.967235 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 12 00:20:08.967243 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 12 00:20:08.967251 kernel: iommu: Default domain type: Translated
Jul 12 00:20:08.967258 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 12 00:20:08.967266 kernel: efivars: Registered efivars operations
Jul 12 00:20:08.967273 kernel: vgaarb: loaded
Jul 12 00:20:08.967327 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 12 00:20:08.967336 kernel: VFS: Disk quotas dquot_6.6.0
Jul 12 00:20:08.967344 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 12 00:20:08.967352 kernel: pnp: PnP ACPI init
Jul 12 00:20:08.967455 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 12 00:20:08.967469 kernel: pnp: PnP ACPI: found 1 devices
Jul 12 00:20:08.967476 kernel: NET: Registered PF_INET protocol family
Jul 12 00:20:08.967484 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 12 00:20:08.967495 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 12 00:20:08.967503 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 12 00:20:08.967510 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 12 00:20:08.967518 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 12 00:20:08.967525 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 12 00:20:08.967533 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:20:08.967540 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 12 00:20:08.967548 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 12 00:20:08.967555 kernel: PCI: CLS 0 bytes, default 64
Jul 12 00:20:08.967564 kernel: kvm [1]: HYP mode not available
Jul 12 00:20:08.967571 kernel: Initialise system trusted keyrings
Jul 12 00:20:08.967579 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 12 00:20:08.967586 kernel: Key type asymmetric registered
Jul 12 00:20:08.967637 kernel: Asymmetric key parser 'x509' registered
Jul 12 00:20:08.967646 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 12 00:20:08.967654 kernel: io scheduler mq-deadline registered
Jul 12 00:20:08.967714 kernel: io scheduler kyber registered
Jul 12 00:20:08.967724 kernel: io scheduler bfq registered
Jul 12 00:20:08.967736 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 12 00:20:08.967744 kernel: ACPI: button: Power Button [PWRB]
Jul 12 00:20:08.967751 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 12 00:20:08.967852 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 12 00:20:08.967864 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 12 00:20:08.967871 kernel: thunder_xcv, ver 1.0
Jul 12 00:20:08.967879 kernel: thunder_bgx, ver 1.0
Jul 12 00:20:08.967886 kernel: nicpf, ver 1.0
Jul 12 00:20:08.967893 kernel: nicvf, ver 1.0
Jul 12 00:20:08.967975 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 12 00:20:08.968101 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T00:20:08 UTC (1752279608)
Jul 12 00:20:08.968115 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 12 00:20:08.968123 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 12 00:20:08.968131 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 12 00:20:08.968138 kernel: watchdog: Hard watchdog permanently disabled
Jul 12 00:20:08.968146 kernel: NET: Registered PF_INET6 protocol family
Jul 12 00:20:08.968153 kernel: Segment Routing with IPv6
Jul 12 00:20:08.968164 kernel: In-situ OAM (IOAM) with IPv6
Jul 12 00:20:08.968172 kernel: NET: Registered PF_PACKET protocol family
Jul 12 00:20:08.968180 kernel: Key type dns_resolver registered
Jul 12 00:20:08.968187 kernel: registered taskstats version 1
Jul 12 00:20:08.968195 kernel: Loading compiled-in X.509 certificates
Jul 12 00:20:08.968202 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ed6b382df707adbd5942eaa048a1031fe26cbf15'
Jul 12 00:20:08.968210 kernel: Key type .fscrypt registered
Jul 12 00:20:08.968217 kernel: Key type fscrypt-provisioning registered
Jul 12 00:20:08.968225 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 12 00:20:08.968234 kernel: ima: Allocated hash algorithm: sha1
Jul 12 00:20:08.968241 kernel: ima: No architecture policies found
Jul 12 00:20:08.968248 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 12 00:20:08.968256 kernel: clk: Disabling unused clocks
Jul 12 00:20:08.968263 kernel: Freeing unused kernel memory: 39424K
Jul 12 00:20:08.968271 kernel: Run /init as init process
Jul 12 00:20:08.968278 kernel: with arguments:
Jul 12 00:20:08.968286 kernel: /init
Jul 12 00:20:08.968332 kernel: with environment:
Jul 12 00:20:08.968344 kernel: HOME=/
Jul 12 00:20:08.968352 kernel: TERM=linux
Jul 12 00:20:08.968359 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 12 00:20:08.968368 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:20:08.968378 systemd[1]: Detected virtualization kvm.
Jul 12 00:20:08.968386 systemd[1]: Detected architecture arm64.
Jul 12 00:20:08.968400 systemd[1]: Running in initrd.
Jul 12 00:20:08.968410 systemd[1]: No hostname configured, using default hostname.
Jul 12 00:20:08.968417 systemd[1]: Hostname set to .
Jul 12 00:20:08.968426 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:20:08.968434 systemd[1]: Queued start job for default target initrd.target.
Jul 12 00:20:08.968442 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:20:08.968450 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:20:08.968458 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 12 00:20:08.968466 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:20:08.968476 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 12 00:20:08.968484 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 12 00:20:08.968493 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 12 00:20:08.968501 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 12 00:20:08.968543 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:20:08.968551 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:20:08.968559 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:20:08.968569 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:20:08.968577 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:20:08.968925 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:20:08.968947 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:20:08.968956 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:20:08.968964 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 12 00:20:08.968972 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 12 00:20:08.968980 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:20:08.968989 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:20:08.969002 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:20:08.969010 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:20:08.969018 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 12 00:20:08.969026 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:20:08.969034 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 12 00:20:08.969042 systemd[1]: Starting systemd-fsck-usr.service...
Jul 12 00:20:08.969050 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:20:08.969058 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:20:08.969067 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:20:08.969076 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 12 00:20:08.969083 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:20:08.969091 systemd[1]: Finished systemd-fsck-usr.service.
Jul 12 00:20:08.969137 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 12 00:20:08.969179 systemd-journald[238]: Collecting audit messages is disabled.
Jul 12 00:20:08.969200 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:20:08.969208 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 12 00:20:08.969216 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:20:08.969225 kernel: Bridge firewalling registered
Jul 12 00:20:08.969233 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 12 00:20:08.969242 systemd-journald[238]: Journal started
Jul 12 00:20:08.969260 systemd-journald[238]: Runtime Journal (/run/log/journal/223c7a599beb4ff2a91b5c9faf918309) is 5.9M, max 47.3M, 41.4M free.
Jul 12 00:20:08.952707 systemd-modules-load[239]: Inserted module 'overlay'
Jul 12 00:20:08.972444 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:20:08.967747 systemd-modules-load[239]: Inserted module 'br_netfilter'
Jul 12 00:20:08.973640 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:20:08.977658 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:20:08.981817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:20:08.985043 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:20:08.991992 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:20:08.993495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:20:08.995605 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:20:08.997718 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:20:09.006839 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 12 00:20:09.009120 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:20:09.019211 dracut-cmdline[275]: dracut-dracut-053
Jul 12 00:20:09.022385 dracut-cmdline[275]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=52e0eba0325ad9e58f7b221f0132165c94b480ebf93a398f4fe935660ba9e15c
Jul 12 00:20:09.038650 systemd-resolved[276]: Positive Trust Anchors:
Jul 12 00:20:09.038683 systemd-resolved[276]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:20:09.038716 systemd-resolved[276]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:20:09.046706 systemd-resolved[276]: Defaulting to hostname 'linux'.
Jul 12 00:20:09.048223 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:20:09.049755 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:20:09.100697 kernel: SCSI subsystem initialized
Jul 12 00:20:09.105681 kernel: Loading iSCSI transport class v2.0-870.
Jul 12 00:20:09.114706 kernel: iscsi: registered transport (tcp)
Jul 12 00:20:09.130028 kernel: iscsi: registered transport (qla4xxx)
Jul 12 00:20:09.130056 kernel: QLogic iSCSI HBA Driver
Jul 12 00:20:09.178127 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:20:09.189818 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 12 00:20:09.207253 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 12 00:20:09.207317 kernel: device-mapper: uevent: version 1.0.3 Jul 12 00:20:09.208313 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jul 12 00:20:09.256700 kernel: raid6: neonx8 gen() 15306 MB/s Jul 12 00:20:09.273730 kernel: raid6: neonx4 gen() 15562 MB/s Jul 12 00:20:09.290725 kernel: raid6: neonx2 gen() 13233 MB/s Jul 12 00:20:09.307720 kernel: raid6: neonx1 gen() 10352 MB/s Jul 12 00:20:09.324691 kernel: raid6: int64x8 gen() 6946 MB/s Jul 12 00:20:09.341696 kernel: raid6: int64x4 gen() 7321 MB/s Jul 12 00:20:09.358711 kernel: raid6: int64x2 gen() 5963 MB/s Jul 12 00:20:09.375902 kernel: raid6: int64x1 gen() 4835 MB/s Jul 12 00:20:09.375938 kernel: raid6: using algorithm neonx4 gen() 15562 MB/s Jul 12 00:20:09.393825 kernel: raid6: .... xor() 12224 MB/s, rmw enabled Jul 12 00:20:09.393847 kernel: raid6: using neon recovery algorithm Jul 12 00:20:09.399879 kernel: xor: measuring software checksum speed Jul 12 00:20:09.399898 kernel: 8regs : 19712 MB/sec Jul 12 00:20:09.400686 kernel: 32regs : 19669 MB/sec Jul 12 00:20:09.400700 kernel: arm64_neon : 23111 MB/sec Jul 12 00:20:09.401830 kernel: xor: using function: arm64_neon (23111 MB/sec) Jul 12 00:20:09.453700 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 00:20:09.465943 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 12 00:20:09.474855 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 00:20:09.486327 systemd-udevd[459]: Using default interface naming scheme 'v255'. Jul 12 00:20:09.489481 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 00:20:09.498858 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 00:20:09.510659 dracut-pre-trigger[466]: rd.md=0: removing MD RAID activation Jul 12 00:20:09.537856 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Jul 12 00:20:09.549852 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:20:09.589452 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:20:09.600454 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 12 00:20:09.610076 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:20:09.612067 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:20:09.614112 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:20:09.616645 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:20:09.625868 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 12 00:20:09.638171 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:20:09.648272 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 12 00:20:09.648459 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 12 00:20:09.655881 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 12 00:20:09.655920 kernel: GPT:9289727 != 19775487
Jul 12 00:20:09.655931 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 12 00:20:09.655941 kernel: GPT:9289727 != 19775487
Jul 12 00:20:09.655950 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 12 00:20:09.655960 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:20:09.653447 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:20:09.653538 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:20:09.659208 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:20:09.662460 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:20:09.662518 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:20:09.671823 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by (udev-worker) (517)
Jul 12 00:20:09.666783 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:20:09.675766 kernel: BTRFS: device fsid 394cecf3-1fd4-438a-991e-dc2b4121da0c devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (505)
Jul 12 00:20:09.682858 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:20:09.692325 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 12 00:20:09.697727 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:20:09.706090 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 12 00:20:09.713980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:20:09.718319 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 12 00:20:09.719609 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 12 00:20:09.727827 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 12 00:20:09.729692 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 12 00:20:09.735408 disk-uuid[547]: Primary Header is updated.
Jul 12 00:20:09.735408 disk-uuid[547]: Secondary Entries is updated.
Jul 12 00:20:09.735408 disk-uuid[547]: Secondary Header is updated.
Jul 12 00:20:09.739693 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:20:09.749264 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:20:10.751119 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 12 00:20:10.751206 disk-uuid[549]: The operation has completed successfully.
Jul 12 00:20:10.778280 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 12 00:20:10.778389 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 12 00:20:10.793864 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 12 00:20:10.796675 sh[571]: Success
Jul 12 00:20:10.813680 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 12 00:20:10.852220 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 12 00:20:10.854047 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 12 00:20:10.855132 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 12 00:20:10.867079 kernel: BTRFS info (device dm-0): first mount of filesystem 394cecf3-1fd4-438a-991e-dc2b4121da0c
Jul 12 00:20:10.867124 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:20:10.867146 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 12 00:20:10.867936 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 12 00:20:10.868683 kernel: BTRFS info (device dm-0): using free space tree
Jul 12 00:20:10.872152 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 12 00:20:10.873523 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 12 00:20:10.874341 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 12 00:20:10.877090 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 12 00:20:10.888255 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:20:10.888288 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:20:10.888299 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:20:10.890744 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:20:10.897557 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 12 00:20:10.899314 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:20:10.904967 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 12 00:20:10.910823 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 12 00:20:10.974921 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:20:10.998221 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:20:11.005752 ignition[666]: Ignition 2.19.0
Jul 12 00:20:11.005762 ignition[666]: Stage: fetch-offline
Jul 12 00:20:11.005800 ignition[666]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:20:11.005809 ignition[666]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:20:11.005995 ignition[666]: parsed url from cmdline: ""
Jul 12 00:20:11.005998 ignition[666]: no config URL provided
Jul 12 00:20:11.006003 ignition[666]: reading system config file "/usr/lib/ignition/user.ign"
Jul 12 00:20:11.006011 ignition[666]: no config at "/usr/lib/ignition/user.ign"
Jul 12 00:20:11.006035 ignition[666]: op(1): [started] loading QEMU firmware config module
Jul 12 00:20:11.006039 ignition[666]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 12 00:20:11.012904 ignition[666]: op(1): [finished] loading QEMU firmware config module
Jul 12 00:20:11.021794 systemd-networkd[762]: lo: Link UP
Jul 12 00:20:11.021802 systemd-networkd[762]: lo: Gained carrier
Jul 12 00:20:11.022486 systemd-networkd[762]: Enumeration completed
Jul 12 00:20:11.022592 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:20:11.022905 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:20:11.022909 systemd-networkd[762]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:20:11.023997 systemd-networkd[762]: eth0: Link UP
Jul 12 00:20:11.024000 systemd-networkd[762]: eth0: Gained carrier
Jul 12 00:20:11.024008 systemd-networkd[762]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:20:11.024271 systemd[1]: Reached target network.target - Network.
Jul 12 00:20:11.046712 systemd-networkd[762]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:20:11.061516 ignition[666]: parsing config with SHA512: 3f95f20aa0e028cbfd1d9cfd44dedae970dce8571def33a6433352c273d1ccfd142acf0b3a4c1225f26b35d0df6fa3c5941eac09f8ee8ab049d45a5d65fc5dca
Jul 12 00:20:11.068225 unknown[666]: fetched base config from "system"
Jul 12 00:20:11.068235 unknown[666]: fetched user config from "qemu"
Jul 12 00:20:11.068693 ignition[666]: fetch-offline: fetch-offline passed
Jul 12 00:20:11.070265 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:20:11.068761 ignition[666]: Ignition finished successfully
Jul 12 00:20:11.072114 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 12 00:20:11.075848 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 12 00:20:11.087267 ignition[769]: Ignition 2.19.0
Jul 12 00:20:11.087277 ignition[769]: Stage: kargs
Jul 12 00:20:11.087453 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:20:11.087463 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:20:11.091389 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 12 00:20:11.088367 ignition[769]: kargs: kargs passed
Jul 12 00:20:11.088426 ignition[769]: Ignition finished successfully
Jul 12 00:20:11.106896 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 12 00:20:11.116476 ignition[777]: Ignition 2.19.0
Jul 12 00:20:11.116487 ignition[777]: Stage: disks
Jul 12 00:20:11.116642 ignition[777]: no configs at "/usr/lib/ignition/base.d"
Jul 12 00:20:11.116652 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:20:11.117544 ignition[777]: disks: disks passed
Jul 12 00:20:11.120281 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 12 00:20:11.117589 ignition[777]: Ignition finished successfully
Jul 12 00:20:11.121529 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 12 00:20:11.122925 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 12 00:20:11.124830 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:20:11.126361 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:20:11.128197 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:20:11.140810 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 12 00:20:11.150229 systemd-fsck[788]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 12 00:20:11.153279 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 12 00:20:11.156042 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 12 00:20:11.200515 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 12 00:20:11.201989 kernel: EXT4-fs (vda9): mounted filesystem 44c8362f-9431-4909-bc9a-f90e514bd0e9 r/w with ordered data mode. Quota mode: none.
Jul 12 00:20:11.201778 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:20:11.214739 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:20:11.217976 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 12 00:20:11.219001 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 12 00:20:11.219042 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 12 00:20:11.219064 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:20:11.225640 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 12 00:20:11.227852 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 12 00:20:11.232761 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (796)
Jul 12 00:20:11.232784 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:20:11.232795 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:20:11.232804 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:20:11.235683 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:20:11.237094 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:20:11.275566 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory
Jul 12 00:20:11.279954 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory
Jul 12 00:20:11.284302 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory
Jul 12 00:20:11.288195 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 12 00:20:11.354597 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 12 00:20:11.365762 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 12 00:20:11.367284 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 12 00:20:11.373704 kernel: BTRFS info (device vda6): last unmount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:20:11.389240 ignition[911]: INFO : Ignition 2.19.0
Jul 12 00:20:11.389240 ignition[911]: INFO : Stage: mount
Jul 12 00:20:11.392416 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:20:11.392416 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:20:11.392416 ignition[911]: INFO : mount: mount passed
Jul 12 00:20:11.392416 ignition[911]: INFO : Ignition finished successfully
Jul 12 00:20:11.389780 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 12 00:20:11.392073 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 12 00:20:11.400759 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 12 00:20:11.865084 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 12 00:20:11.874923 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 12 00:20:11.881524 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (924)
Jul 12 00:20:11.881561 kernel: BTRFS info (device vda6): first mount of filesystem 2ba3179f-4493-4560-9191-8e514f82bd95
Jul 12 00:20:11.881573 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jul 12 00:20:11.883084 kernel: BTRFS info (device vda6): using free space tree
Jul 12 00:20:11.885682 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 12 00:20:11.886309 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 12 00:20:11.902687 ignition[941]: INFO : Ignition 2.19.0
Jul 12 00:20:11.902687 ignition[941]: INFO : Stage: files
Jul 12 00:20:11.902687 ignition[941]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:20:11.902687 ignition[941]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:20:11.906837 ignition[941]: DEBUG : files: compiled without relabeling support, skipping
Jul 12 00:20:11.906837 ignition[941]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 12 00:20:11.906837 ignition[941]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 12 00:20:11.906837 ignition[941]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 12 00:20:11.911958 ignition[941]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 12 00:20:11.911958 ignition[941]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 12 00:20:11.911958 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:20:11.911958 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jul 12 00:20:11.907211 unknown[941]: wrote ssh authorized keys file for user: core
Jul 12 00:20:11.962784 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 12 00:20:12.212159 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jul 12 00:20:12.212159 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:20:12.215906 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 12 00:20:12.384910 systemd-networkd[762]: eth0: Gained IPv6LL
Jul 12 00:20:12.416540 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 12 00:20:12.524176 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 12 00:20:12.524176 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:20:12.527865 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jul 12 00:20:13.094585 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 12 00:20:13.549525 ignition[941]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jul 12 00:20:13.549525 ignition[941]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 12 00:20:13.553006 ignition[941]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:20:13.575518 ignition[941]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:20:13.579696 ignition[941]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 12 00:20:13.582454 ignition[941]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 12 00:20:13.582454 ignition[941]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 12 00:20:13.582454 ignition[941]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 12 00:20:13.582454 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:20:13.582454 ignition[941]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 12 00:20:13.582454 ignition[941]: INFO : files: files passed
Jul 12 00:20:13.582454 ignition[941]: INFO : Ignition finished successfully
Jul 12 00:20:13.584462 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 12 00:20:13.597800 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 12 00:20:13.600453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 12 00:20:13.601938 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 12 00:20:13.602017 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 12 00:20:13.608394 initrd-setup-root-after-ignition[969]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 12 00:20:13.612048 initrd-setup-root-after-ignition[971]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:20:13.612048 initrd-setup-root-after-ignition[971]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:20:13.615306 initrd-setup-root-after-ignition[975]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 12 00:20:13.616953 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:20:13.618603 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 12 00:20:13.628810 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 12 00:20:13.650515 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 12 00:20:13.650651 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 12 00:20:13.652854 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 12 00:20:13.654617 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 12 00:20:13.656370 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 12 00:20:13.657187 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 12 00:20:13.672705 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:20:13.683844 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 12 00:20:13.691810 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:20:13.693057 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:20:13.695049 systemd[1]: Stopped target timers.target - Timer Units.
Jul 12 00:20:13.696751 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 12 00:20:13.696876 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 12 00:20:13.699325 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 12 00:20:13.701265 systemd[1]: Stopped target basic.target - Basic System.
Jul 12 00:20:13.702858 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 12 00:20:13.704512 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 12 00:20:13.706398 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 12 00:20:13.708293 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 12 00:20:13.710067 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 12 00:20:13.711919 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 12 00:20:13.713777 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 12 00:20:13.715486 systemd[1]: Stopped target swap.target - Swaps.
Jul 12 00:20:13.716955 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 12 00:20:13.717077 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 12 00:20:13.719330 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:20:13.721223 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:20:13.723063 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 12 00:20:13.727723 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:20:13.728929 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 12 00:20:13.729049 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 12 00:20:13.731730 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 12 00:20:13.731849 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 12 00:20:13.733798 systemd[1]: Stopped target paths.target - Path Units.
Jul 12 00:20:13.735371 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 12 00:20:13.736766 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:20:13.738255 systemd[1]: Stopped target slices.target - Slice Units.
Jul 12 00:20:13.740033 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 12 00:20:13.742135 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 12 00:20:13.742267 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 12 00:20:13.743906 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 12 00:20:13.744040 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 12 00:20:13.745550 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 12 00:20:13.745729 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 12 00:20:13.747380 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 12 00:20:13.747538 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 12 00:20:13.759909 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 12 00:20:13.761633 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 12 00:20:13.761837 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:20:13.765064 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 12 00:20:13.765907 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 12 00:20:13.766051 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:20:13.767974 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 12 00:20:13.768084 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 12 00:20:13.772676 ignition[996]: INFO : Ignition 2.19.0
Jul 12 00:20:13.772676 ignition[996]: INFO : Stage: umount
Jul 12 00:20:13.772676 ignition[996]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 12 00:20:13.772676 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 12 00:20:13.778249 ignition[996]: INFO : umount: umount passed
Jul 12 00:20:13.778249 ignition[996]: INFO : Ignition finished successfully
Jul 12 00:20:13.773719 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 12 00:20:13.774975 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 12 00:20:13.777408 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 12 00:20:13.777487 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 12 00:20:13.779977 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 12 00:20:13.781974 systemd[1]: Stopped target network.target - Network.
Jul 12 00:20:13.783868 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 12 00:20:13.783936 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 12 00:20:13.785549 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 12 00:20:13.785596 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 12 00:20:13.787389 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 12 00:20:13.787434 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 12 00:20:13.789026 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 12 00:20:13.789073 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 12 00:20:13.790950 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 12 00:20:13.792596 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 12 00:20:13.799731 systemd-networkd[762]: eth0: DHCPv6 lease lost
Jul 12 00:20:13.802801 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 12 00:20:13.803829 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 12 00:20:13.805180 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 12 00:20:13.805273 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 12 00:20:13.807988 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 12 00:20:13.808034 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:20:13.822756 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 12 00:20:13.823600 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 12 00:20:13.823680 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 12 00:20:13.825841 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 12 00:20:13.825886 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:20:13.827727 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 12 00:20:13.827772 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:20:13.829966 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 12 00:20:13.830014 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:20:13.831953 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:20:13.843461 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 12 00:20:13.843608 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:20:13.847113 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 12 00:20:13.847197 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 12 00:20:13.849294 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 12 00:20:13.849388 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:20:13.851165 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 12 00:20:13.851199 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:20:13.852858 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 12 00:20:13.852905 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 12 00:20:13.855552 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 12 00:20:13.855595 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 12 00:20:13.858249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 12 00:20:13.858292 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 12 00:20:13.873824 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 12 00:20:13.874817 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 12 00:20:13.874874 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:20:13.876915 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 12 00:20:13.876959 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:20:13.879038 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 12 00:20:13.879732 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 12 00:20:13.881063 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 12 00:20:13.881140 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 12 00:20:13.882814 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 12 00:20:13.882902 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 12 00:20:13.886017 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 12 00:20:13.888254 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 12 00:20:13.897117 systemd[1]: Switching root.
Jul 12 00:20:13.919768 systemd-journald[238]: Journal stopped
Jul 12 00:20:14.658658 systemd-journald[238]: Received SIGTERM from PID 1 (systemd).
Jul 12 00:20:14.658729 kernel: SELinux: policy capability network_peer_controls=1
Jul 12 00:20:14.658742 kernel: SELinux: policy capability open_perms=1
Jul 12 00:20:14.658753 kernel: SELinux: policy capability extended_socket_class=1
Jul 12 00:20:14.658763 kernel: SELinux: policy capability always_check_network=0
Jul 12 00:20:14.658772 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 12 00:20:14.658782 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 12 00:20:14.658806 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 12 00:20:14.658816 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 12 00:20:14.658826 kernel: audit: type=1403 audit(1752279614.086:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 12 00:20:14.658839 systemd[1]: Successfully loaded SELinux policy in 32.895ms.
Jul 12 00:20:14.658860 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.476ms.
Jul 12 00:20:14.658872 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 12 00:20:14.658884 systemd[1]: Detected virtualization kvm.
Jul 12 00:20:14.658894 systemd[1]: Detected architecture arm64.
Jul 12 00:20:14.658905 systemd[1]: Detected first boot.
Jul 12 00:20:14.658915 systemd[1]: Initializing machine ID from VM UUID.
Jul 12 00:20:14.658926 zram_generator::config[1040]: No configuration found.
Jul 12 00:20:14.658942 systemd[1]: Populated /etc with preset unit settings.
Jul 12 00:20:14.658953 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 12 00:20:14.658965 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 12 00:20:14.658976 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:20:14.658987 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 12 00:20:14.658998 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 12 00:20:14.659009 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 12 00:20:14.659019 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 12 00:20:14.659030 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 12 00:20:14.659043 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 12 00:20:14.659054 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 12 00:20:14.659065 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 12 00:20:14.659076 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 12 00:20:14.659087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 12 00:20:14.659097 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 12 00:20:14.659108 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 12 00:20:14.659119 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 12 00:20:14.659131 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 12 00:20:14.659142 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 12 00:20:14.659152 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 12 00:20:14.659163 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 12 00:20:14.659175 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 12 00:20:14.659186 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 12 00:20:14.659196 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 12 00:20:14.659207 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 12 00:20:14.659220 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 12 00:20:14.659231 systemd[1]: Reached target slices.target - Slice Units.
Jul 12 00:20:14.659241 systemd[1]: Reached target swap.target - Swaps.
Jul 12 00:20:14.659257 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 12 00:20:14.659267 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 12 00:20:14.659278 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 12 00:20:14.659289 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 12 00:20:14.659300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 12 00:20:14.659311 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 12 00:20:14.659323 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 12 00:20:14.659334 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 12 00:20:14.659344 systemd[1]: Mounting media.mount - External Media Directory...
Jul 12 00:20:14.659355 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 12 00:20:14.659373 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 12 00:20:14.659384 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 12 00:20:14.659395 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 12 00:20:14.659410 systemd[1]: Reached target machines.target - Containers.
Jul 12 00:20:14.659422 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 12 00:20:14.659435 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:20:14.659445 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 12 00:20:14.659456 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 12 00:20:14.659467 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:20:14.659477 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:20:14.659488 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:20:14.659498 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 12 00:20:14.659509 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:20:14.659522 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 12 00:20:14.659533 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 12 00:20:14.659543 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 12 00:20:14.659554 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 12 00:20:14.659565 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 12 00:20:14.659575 kernel: loop: module loaded
Jul 12 00:20:14.659585 kernel: fuse: init (API version 7.39)
Jul 12 00:20:14.659594 kernel: ACPI: bus type drm_connector registered
Jul 12 00:20:14.659604 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 12 00:20:14.659617 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 12 00:20:14.659628 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 12 00:20:14.659638 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 12 00:20:14.659689 systemd-journald[1107]: Collecting audit messages is disabled.
Jul 12 00:20:14.659711 systemd-journald[1107]: Journal started
Jul 12 00:20:14.659734 systemd-journald[1107]: Runtime Journal (/run/log/journal/223c7a599beb4ff2a91b5c9faf918309) is 5.9M, max 47.3M, 41.4M free.
Jul 12 00:20:14.443381 systemd[1]: Queued start job for default target multi-user.target.
Jul 12 00:20:14.473557 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 12 00:20:14.473977 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 12 00:20:14.669292 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 12 00:20:14.671823 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 12 00:20:14.671874 systemd[1]: Stopped verity-setup.service.
Jul 12 00:20:14.675985 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 12 00:20:14.676730 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 12 00:20:14.678062 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 12 00:20:14.679590 systemd[1]: Mounted media.mount - External Media Directory.
Jul 12 00:20:14.680810 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 12 00:20:14.682235 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 12 00:20:14.683684 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 12 00:20:14.685726 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 12 00:20:14.687205 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 12 00:20:14.688820 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 12 00:20:14.688969 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 12 00:20:14.690493 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:20:14.690628 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:20:14.692125 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:20:14.692272 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:20:14.693727 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:20:14.693868 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:20:14.695585 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 12 00:20:14.695755 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 12 00:20:14.697152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:20:14.697301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:20:14.698966 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 12 00:20:14.700544 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 12 00:20:14.702287 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 12 00:20:14.715607 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 12 00:20:14.726799 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 12 00:20:14.729187 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 12 00:20:14.730437 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 12 00:20:14.730486 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 12 00:20:14.732742 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 12 00:20:14.735085 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 12 00:20:14.737401 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 12 00:20:14.738569 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:20:14.741645 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 12 00:20:14.746845 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 12 00:20:14.748143 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:20:14.751848 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 12 00:20:14.754015 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:20:14.757769 systemd-journald[1107]: Time spent on flushing to /var/log/journal/223c7a599beb4ff2a91b5c9faf918309 is 19.204ms for 854 entries.
Jul 12 00:20:14.757769 systemd-journald[1107]: System Journal (/var/log/journal/223c7a599beb4ff2a91b5c9faf918309) is 8.0M, max 195.6M, 187.6M free.
Jul 12 00:20:14.793965 systemd-journald[1107]: Received client request to flush runtime journal.
Jul 12 00:20:14.757853 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 12 00:20:14.761616 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 12 00:20:14.765055 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 12 00:20:14.770092 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 12 00:20:14.772226 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 12 00:20:14.773659 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 12 00:20:14.776696 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 12 00:20:14.779087 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 12 00:20:14.785650 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 12 00:20:14.787406 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 12 00:20:14.790870 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 12 00:20:14.794841 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 12 00:20:14.798344 kernel: loop0: detected capacity change from 0 to 114328
Jul 12 00:20:14.797273 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 12 00:20:14.815800 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 12 00:20:14.822583 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 12 00:20:14.823265 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 12 00:20:14.828186 udevadm[1161]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 12 00:20:14.832998 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 12 00:20:14.852989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 12 00:20:14.858991 kernel: loop1: detected capacity change from 0 to 114432
Jul 12 00:20:14.875219 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jul 12 00:20:14.875243 systemd-tmpfiles[1170]: ACLs are not supported, ignoring.
Jul 12 00:20:14.879829 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 12 00:20:14.916756 kernel: loop2: detected capacity change from 0 to 207008
Jul 12 00:20:14.955694 kernel: loop3: detected capacity change from 0 to 114328
Jul 12 00:20:14.961696 kernel: loop4: detected capacity change from 0 to 114432
Jul 12 00:20:14.965690 kernel: loop5: detected capacity change from 0 to 207008
Jul 12 00:20:14.969991 (sd-merge)[1175]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 12 00:20:14.970380 (sd-merge)[1175]: Merged extensions into '/usr'.
Jul 12 00:20:14.974800 systemd[1]: Reloading requested from client PID 1151 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 12 00:20:14.975021 systemd[1]: Reloading...
Jul 12 00:20:15.043270 zram_generator::config[1202]: No configuration found.
Jul 12 00:20:15.061319 ldconfig[1146]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 12 00:20:15.134984 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:20:15.171255 systemd[1]: Reloading finished in 195 ms.
Jul 12 00:20:15.201393 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 12 00:20:15.205649 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 12 00:20:15.220992 systemd[1]: Starting ensure-sysext.service...
Jul 12 00:20:15.223171 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 12 00:20:15.229708 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 12 00:20:15.234443 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 12 00:20:15.235994 systemd[1]: Reloading requested from client PID 1236 ('systemctl') (unit ensure-sysext.service)...
Jul 12 00:20:15.236011 systemd[1]: Reloading...
Jul 12 00:20:15.241902 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 12 00:20:15.242195 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 12 00:20:15.242902 systemd-tmpfiles[1238]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 12 00:20:15.243122 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Jul 12 00:20:15.243179 systemd-tmpfiles[1238]: ACLs are not supported, ignoring.
Jul 12 00:20:15.245701 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:20:15.245714 systemd-tmpfiles[1238]: Skipping /boot
Jul 12 00:20:15.253129 systemd-tmpfiles[1238]: Detected autofs mount point /boot during canonicalization of boot.
Jul 12 00:20:15.253147 systemd-tmpfiles[1238]: Skipping /boot
Jul 12 00:20:15.265832 systemd-udevd[1241]: Using default interface naming scheme 'v255'.
Jul 12 00:20:15.291700 zram_generator::config[1266]: No configuration found.
Jul 12 00:20:15.337714 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1293)
Jul 12 00:20:15.402761 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:20:15.449231 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 12 00:20:15.449411 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 12 00:20:15.451110 systemd[1]: Reloading finished in 214 ms.
Jul 12 00:20:15.467864 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 12 00:20:15.480040 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 12 00:20:15.501748 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 12 00:20:15.507867 systemd[1]: Finished ensure-sysext.service.
Jul 12 00:20:15.535855 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:20:15.538519 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 12 00:20:15.539845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 12 00:20:15.540957 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 12 00:20:15.545720 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 12 00:20:15.550844 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 12 00:20:15.554119 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 12 00:20:15.556347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 12 00:20:15.557586 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 12 00:20:15.562523 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 12 00:20:15.565943 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 12 00:20:15.566679 lvm[1334]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:20:15.575848 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 12 00:20:15.582883 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 12 00:20:15.586730 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 12 00:20:15.589751 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 12 00:20:15.594346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 12 00:20:15.596380 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 12 00:20:15.598062 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 12 00:20:15.599704 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 12 00:20:15.602238 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 12 00:20:15.602376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 12 00:20:15.603931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 12 00:20:15.604071 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 12 00:20:15.606054 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 12 00:20:15.606242 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 12 00:20:15.608528 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 12 00:20:15.610934 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 12 00:20:15.613387 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 12 00:20:15.621562 augenrules[1361]: No rules
Jul 12 00:20:15.623179 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:20:15.628758 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 12 00:20:15.631027 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 12 00:20:15.643930 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 12 00:20:15.645187 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 12 00:20:15.645281 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 12 00:20:15.646541 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 12 00:20:15.649889 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 12 00:20:15.652853 lvm[1377]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 12 00:20:15.650889 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 12 00:20:15.661141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 12 00:20:15.662776 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 12 00:20:15.677200 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 12 00:20:15.686396 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 12 00:20:15.744710 systemd-networkd[1347]: lo: Link UP
Jul 12 00:20:15.745012 systemd-networkd[1347]: lo: Gained carrier
Jul 12 00:20:15.745827 systemd-networkd[1347]: Enumeration completed
Jul 12 00:20:15.746695 systemd-networkd[1347]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:20:15.746765 systemd-networkd[1347]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 12 00:20:15.746971 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 12 00:20:15.747608 systemd-networkd[1347]: eth0: Link UP
Jul 12 00:20:15.747756 systemd-networkd[1347]: eth0: Gained carrier
Jul 12 00:20:15.747821 systemd-networkd[1347]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 12 00:20:15.748646 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 12 00:20:15.750007 systemd[1]: Reached target time-set.target - System Time Set.
Jul 12 00:20:15.753805 systemd-resolved[1352]: Positive Trust Anchors:
Jul 12 00:20:15.753821 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 12 00:20:15.753854 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 12 00:20:15.759724 systemd-networkd[1347]: eth0: DHCPv4 address 10.0.0.97/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 12 00:20:15.759855 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 12 00:20:15.760879 systemd-timesyncd[1354]: Network configuration changed, trying to establish connection.
Jul 12 00:20:15.761879 systemd-timesyncd[1354]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 12 00:20:15.761940 systemd-timesyncd[1354]: Initial clock synchronization to Sat 2025-07-12 00:20:15.715943 UTC.
Jul 12 00:20:15.762537 systemd-resolved[1352]: Defaulting to hostname 'linux'.
Jul 12 00:20:15.764252 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 12 00:20:15.765500 systemd[1]: Reached target network.target - Network.
Jul 12 00:20:15.766537 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 12 00:20:15.767917 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 12 00:20:15.769151 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 12 00:20:15.770514 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 12 00:20:15.771959 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 12 00:20:15.773212 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 12 00:20:15.774562 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 12 00:20:15.775880 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 12 00:20:15.775923 systemd[1]: Reached target paths.target - Path Units.
Jul 12 00:20:15.776894 systemd[1]: Reached target timers.target - Timer Units.
Jul 12 00:20:15.778433 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 12 00:20:15.781035 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 12 00:20:15.791950 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 12 00:20:15.794055 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 12 00:20:15.795387 systemd[1]: Reached target sockets.target - Socket Units.
Jul 12 00:20:15.796457 systemd[1]: Reached target basic.target - Basic System.
Jul 12 00:20:15.797566 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:20:15.797598 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 12 00:20:15.798966 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 12 00:20:15.801097 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 12 00:20:15.803101 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 12 00:20:15.805933 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 12 00:20:15.807098 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 12 00:20:15.809935 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 12 00:20:15.813600 jq[1398]: false
Jul 12 00:20:15.815229 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 12 00:20:15.820523 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 12 00:20:15.824138 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 12 00:20:15.824988 extend-filesystems[1399]: Found loop3
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found loop4
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found loop5
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda1
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda2
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda3
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found usr
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda4
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda6
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda7
Jul 12 00:20:15.827239 extend-filesystems[1399]: Found vda9
Jul 12 00:20:15.827239 extend-filesystems[1399]: Checking size of /dev/vda9
Jul 12 00:20:15.829185 dbus-daemon[1397]: [system] SELinux support is enabled
Jul 12 00:20:15.829781 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 12 00:20:15.831533 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 12 00:20:15.831988 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 12 00:20:15.849158 jq[1413]: true
Jul 12 00:20:15.832940 systemd[1]: Starting update-engine.service - Update Engine...
Jul 12 00:20:15.835888 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 12 00:20:15.837602 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 12 00:20:15.846509 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 12 00:20:15.846704 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 12 00:20:15.847007 systemd[1]: motdgen.service: Deactivated successfully.
Jul 12 00:20:15.847152 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 12 00:20:15.853006 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 12 00:20:15.853181 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 12 00:20:15.855965 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1290)
Jul 12 00:20:15.869010 extend-filesystems[1399]: Resized partition /dev/vda9
Jul 12 00:20:15.877782 extend-filesystems[1429]: resize2fs 1.47.1 (20-May-2024)
Jul 12 00:20:15.885027 (ntainerd)[1428]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 12 00:20:15.890183 jq[1422]: true
Jul 12 00:20:15.890679 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 12 00:20:15.901981 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 12 00:20:15.902023 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 12 00:20:15.903505 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 12 00:20:15.903520 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 12 00:20:15.913167 tar[1421]: linux-arm64/LICENSE
Jul 12 00:20:15.913418 tar[1421]: linux-arm64/helm
Jul 12 00:20:15.919779 update_engine[1412]: I20250712 00:20:15.919000 1412 main.cc:92] Flatcar Update Engine starting
Jul 12 00:20:15.922365 update_engine[1412]: I20250712 00:20:15.922186 1412 update_check_scheduler.cc:74] Next update check in 6m54s
Jul 12 00:20:15.922476 systemd[1]: Started update-engine.service - Update Engine.
Jul 12 00:20:15.932849 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 12 00:20:15.933970 systemd-logind[1410]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 12 00:20:15.934455 systemd-logind[1410]: New seat seat0.
Jul 12 00:20:15.937689 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 12 00:20:15.944243 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 12 00:20:15.968692 extend-filesystems[1429]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 12 00:20:15.968692 extend-filesystems[1429]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 12 00:20:15.968692 extend-filesystems[1429]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 12 00:20:15.975171 extend-filesystems[1399]: Resized filesystem in /dev/vda9
Jul 12 00:20:15.976623 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 12 00:20:15.978706 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 12 00:20:15.984856 bash[1450]: Updated "/home/core/.ssh/authorized_keys"
Jul 12 00:20:15.986592 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 12 00:20:15.988596 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 12 00:20:15.990573 locksmithd[1446]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 12 00:20:16.116199 containerd[1428]: time="2025-07-12T00:20:16.116105751Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jul 12 00:20:16.147026 containerd[1428]: time="2025-07-12T00:20:16.146974588Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:16.148712 containerd[1428]: time="2025-07-12T00:20:16.148645591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:16.148767 containerd[1428]: time="2025-07-12T00:20:16.148712695Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 12 00:20:16.148767 containerd[1428]: time="2025-07-12T00:20:16.148730909Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 12 00:20:16.148925 containerd[1428]: time="2025-07-12T00:20:16.148902623Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 12 00:20:16.148959 containerd[1428]: time="2025-07-12T00:20:16.148927147Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149006 containerd[1428]: time="2025-07-12T00:20:16.148986942Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149006 containerd[1428]: time="2025-07-12T00:20:16.149003718Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149226 containerd[1428]: time="2025-07-12T00:20:16.149200236Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149253 containerd[1428]: time="2025-07-12T00:20:16.149232030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149253 containerd[1428]: time="2025-07-12T00:20:16.149246849Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149288 containerd[1428]: time="2025-07-12T00:20:16.149258312Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149668 containerd[1428]: time="2025-07-12T00:20:16.149338717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149668 containerd[1428]: time="2025-07-12T00:20:16.149558602Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149711 containerd[1428]: time="2025-07-12T00:20:16.149688256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 12 00:20:16.149711 containerd[1428]: time="2025-07-12T00:20:16.149707108Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 12 00:20:16.149819 containerd[1428]: time="2025-07-12T00:20:16.149795941Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 12 00:20:16.149866 containerd[1428]: time="2025-07-12T00:20:16.149850223Z" level=info msg="metadata content store policy set" policy=shared
Jul 12 00:20:16.153592 containerd[1428]: time="2025-07-12T00:20:16.153562138Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 12 00:20:16.153646 containerd[1428]: time="2025-07-12T00:20:16.153619895Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 12 00:20:16.153680 containerd[1428]: time="2025-07-12T00:20:16.153645978Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 12 00:20:16.153779 containerd[1428]: time="2025-07-12T00:20:16.153761013Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 12 00:20:16.153804 containerd[1428]: time="2025-07-12T00:20:16.153790730Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 12 00:20:16.154388 containerd[1428]: time="2025-07-12T00:20:16.153932847Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 12 00:20:16.154388 containerd[1428]: time="2025-07-12T00:20:16.154239686Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 12 00:20:16.154388 containerd[1428]: time="2025-07-12T00:20:16.154351047Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 12 00:20:16.154388 containerd[1428]: time="2025-07-12T00:20:16.154367503Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 12 00:20:16.154388 containerd[1428]: time="2025-07-12T00:20:16.154384439Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 12 00:20:16.154388 containerd[1428]: time="2025-07-12T00:20:16.154399018Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154414556Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154428016Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154441597Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154457134Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154471114Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154484535Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154497117Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154517688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154532426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154544449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154557670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154569653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154583553Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.154622 containerd[1428]: time="2025-07-12T00:20:16.154601887Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154618064Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154635878Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154654332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154707455Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154720597Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154736973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154753829Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154774319Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154792653Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155067 containerd[1428]: time="2025-07-12T00:20:16.154803797Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 12 00:20:16.155486 containerd[1428]: time="2025-07-12T00:20:16.155455422Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 12 00:20:16.155666 containerd[1428]: time="2025-07-12T00:20:16.155645390Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 12 00:20:16.155707 containerd[1428]: time="2025-07-12T00:20:16.155673390Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 12 00:20:16.155707 containerd[1428]: time="2025-07-12T00:20:16.155687649Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 12 00:20:16.155707 containerd[1428]: time="2025-07-12T00:20:16.155698394Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.155801 containerd[1428]: time="2025-07-12T00:20:16.155711735Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 12 00:20:16.155801 containerd[1428]: time="2025-07-12T00:20:16.155722359Z" level=info msg="NRI interface is disabled by configuration."
Jul 12 00:20:16.155801 containerd[1428]: time="2025-07-12T00:20:16.155747843Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 12 00:20:16.156188 containerd[1428]: time="2025-07-12T00:20:16.156112640Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 12 00:20:16.156188 containerd[1428]: time="2025-07-12T00:20:16.156188531Z" level=info msg="Connect containerd service"
Jul 12 00:20:16.156440 containerd[1428]: time="2025-07-12T00:20:16.156228793Z" level=info msg="using legacy CRI server"
Jul 12 00:20:16.156440 containerd[1428]: time="2025-07-12T00:20:16.156237940Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 12 00:20:16.156440 containerd[1428]: time="2025-07-12T00:20:16.156318225Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 12 00:20:16.157146 containerd[1428]: time="2025-07-12T00:20:16.157104417Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:20:16.157387 containerd[1428]: time="2025-07-12T00:20:16.157352181Z" level=info msg="Start subscribing containerd event"
Jul 12 00:20:16.157387 containerd[1428]: time="2025-07-12T00:20:16.157404586Z" level=info msg="Start recovering state"
Jul 12 00:20:16.157387 containerd[1428]: time="2025-07-12T00:20:16.157477282Z" level=info msg="Start event monitor"
Jul 12 00:20:16.157585 containerd[1428]: time="2025-07-12T00:20:16.157488426Z" level=info msg="Start snapshots syncer"
Jul 12 00:20:16.157585 containerd[1428]: time="2025-07-12T00:20:16.157497133Z" level=info msg="Start cni network conf syncer for default"
Jul 12 00:20:16.157585 containerd[1428]: time="2025-07-12T00:20:16.157505521Z" level=info msg="Start streaming server"
Jul 12 00:20:16.158690 containerd[1428]: time="2025-07-12T00:20:16.158528734Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 12 00:20:16.158690 containerd[1428]: time="2025-07-12T00:20:16.158590245Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 12 00:20:16.158690 containerd[1428]: time="2025-07-12T00:20:16.158652476Z" level=info msg="containerd successfully booted in 0.045446s"
Jul 12 00:20:16.158786 systemd[1]: Started containerd.service - containerd container runtime.
Jul 12 00:20:16.314008 tar[1421]: linux-arm64/README.md
Jul 12 00:20:16.331555 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 12 00:20:16.800700 sshd_keygen[1420]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 12 00:20:16.820157 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 12 00:20:16.827009 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 12 00:20:16.832932 systemd[1]: issuegen.service: Deactivated successfully.
Jul 12 00:20:16.833137 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 12 00:20:16.836856 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 12 00:20:16.849600 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 12 00:20:16.852561 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 12 00:20:16.854908 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 12 00:20:16.856290 systemd[1]: Reached target getty.target - Login Prompts.
Jul 12 00:20:16.992836 systemd-networkd[1347]: eth0: Gained IPv6LL
Jul 12 00:20:16.995455 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 12 00:20:16.997477 systemd[1]: Reached target network-online.target - Network is Online.
Jul 12 00:20:17.019045 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 12 00:20:17.021808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:20:17.024015 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 12 00:20:17.039522 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 12 00:20:17.039822 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 12 00:20:17.041544 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 12 00:20:17.044603 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 12 00:20:17.631284 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:20:17.633282 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 12 00:20:17.637782 systemd[1]: Startup finished in 645ms (kernel) + 5.334s (initrd) + 3.592s (userspace) = 9.572s.
Jul 12 00:20:17.639827 (kubelet)[1510]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:20:18.094169 kubelet[1510]: E0712 00:20:18.094042 1510 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:20:18.096556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:20:18.096796 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:20:21.441507 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 12 00:20:21.442647 systemd[1]: Started sshd@0-10.0.0.97:22-10.0.0.1:52548.service - OpenSSH per-connection server daemon (10.0.0.1:52548).
Jul 12 00:20:21.524742 sshd[1524]: Accepted publickey for core from 10.0.0.1 port 52548 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:20:21.529176 sshd[1524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:20:21.551614 systemd-logind[1410]: New session 1 of user core.
Jul 12 00:20:21.552626 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 12 00:20:21.562948 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 12 00:20:21.572996 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 12 00:20:21.575363 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 12 00:20:21.581833 (systemd)[1528]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 12 00:20:21.654693 systemd[1528]: Queued start job for default target default.target.
Jul 12 00:20:21.667635 systemd[1528]: Created slice app.slice - User Application Slice.
Jul 12 00:20:21.667688 systemd[1528]: Reached target paths.target - Paths.
Jul 12 00:20:21.667702 systemd[1528]: Reached target timers.target - Timers.
Jul 12 00:20:21.669019 systemd[1528]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 12 00:20:21.678997 systemd[1528]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 12 00:20:21.679060 systemd[1528]: Reached target sockets.target - Sockets.
Jul 12 00:20:21.679073 systemd[1528]: Reached target basic.target - Basic System.
Jul 12 00:20:21.679110 systemd[1528]: Reached target default.target - Main User Target.
Jul 12 00:20:21.679137 systemd[1528]: Startup finished in 91ms.
Jul 12 00:20:21.679455 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 12 00:20:21.680937 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 12 00:20:21.748325 systemd[1]: Started sshd@1-10.0.0.97:22-10.0.0.1:52552.service - OpenSSH per-connection server daemon (10.0.0.1:52552).
Jul 12 00:20:21.787696 sshd[1539]: Accepted publickey for core from 10.0.0.1 port 52552 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:20:21.789021 sshd[1539]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:20:21.794205 systemd-logind[1410]: New session 2 of user core.
Jul 12 00:20:21.802856 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 12 00:20:21.854365 sshd[1539]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:21.870049 systemd[1]: sshd@1-10.0.0.97:22-10.0.0.1:52552.service: Deactivated successfully.
Jul 12 00:20:21.871554 systemd[1]: session-2.scope: Deactivated successfully.
Jul 12 00:20:21.873053 systemd-logind[1410]: Session 2 logged out. Waiting for processes to exit.
Jul 12 00:20:21.882915 systemd[1]: Started sshd@2-10.0.0.97:22-10.0.0.1:52566.service - OpenSSH per-connection server daemon (10.0.0.1:52566).
Jul 12 00:20:21.883925 systemd-logind[1410]: Removed session 2.
Jul 12 00:20:21.912471 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 52566 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:20:21.913753 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:20:21.917827 systemd-logind[1410]: New session 3 of user core.
Jul 12 00:20:21.924812 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 12 00:20:21.972482 sshd[1546]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:21.990429 systemd[1]: sshd@2-10.0.0.97:22-10.0.0.1:52566.service: Deactivated successfully.
Jul 12 00:20:21.992098 systemd[1]: session-3.scope: Deactivated successfully.
Jul 12 00:20:21.993415 systemd-logind[1410]: Session 3 logged out. Waiting for processes to exit.
Jul 12 00:20:21.995860 systemd[1]: Started sshd@3-10.0.0.97:22-10.0.0.1:52582.service - OpenSSH per-connection server daemon (10.0.0.1:52582).
Jul 12 00:20:21.996778 systemd-logind[1410]: Removed session 3.
Jul 12 00:20:22.027898 sshd[1553]: Accepted publickey for core from 10.0.0.1 port 52582 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:20:22.029185 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:20:22.033416 systemd-logind[1410]: New session 4 of user core.
Jul 12 00:20:22.046812 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 12 00:20:22.100356 sshd[1553]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:22.114157 systemd[1]: sshd@3-10.0.0.97:22-10.0.0.1:52582.service: Deactivated successfully.
Jul 12 00:20:22.115707 systemd[1]: session-4.scope: Deactivated successfully.
Jul 12 00:20:22.117981 systemd-logind[1410]: Session 4 logged out. Waiting for processes to exit.
Jul 12 00:20:22.118386 systemd[1]: Started sshd@4-10.0.0.97:22-10.0.0.1:52586.service - OpenSSH per-connection server daemon (10.0.0.1:52586).
Jul 12 00:20:22.119551 systemd-logind[1410]: Removed session 4.
Jul 12 00:20:22.150882 sshd[1560]: Accepted publickey for core from 10.0.0.1 port 52586 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:20:22.152074 sshd[1560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:20:22.155838 systemd-logind[1410]: New session 5 of user core.
Jul 12 00:20:22.165806 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 12 00:20:22.234542 sudo[1563]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 12 00:20:22.234873 sudo[1563]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:20:22.245532 sudo[1563]: pam_unix(sudo:session): session closed for user root
Jul 12 00:20:22.247230 sshd[1560]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:22.254105 systemd[1]: sshd@4-10.0.0.97:22-10.0.0.1:52586.service: Deactivated successfully.
Jul 12 00:20:22.255567 systemd[1]: session-5.scope: Deactivated successfully.
Jul 12 00:20:22.257845 systemd-logind[1410]: Session 5 logged out. Waiting for processes to exit.
Jul 12 00:20:22.264906 systemd[1]: Started sshd@5-10.0.0.97:22-10.0.0.1:52588.service - OpenSSH per-connection server daemon (10.0.0.1:52588).
Jul 12 00:20:22.265740 systemd-logind[1410]: Removed session 5.
Jul 12 00:20:22.294071 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 52588 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:20:22.295545 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:20:22.299547 systemd-logind[1410]: New session 6 of user core.
Jul 12 00:20:22.308872 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 12 00:20:22.360042 sudo[1572]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 12 00:20:22.360334 sudo[1572]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:20:22.363384 sudo[1572]: pam_unix(sudo:session): session closed for user root
Jul 12 00:20:22.368110 sudo[1571]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 12 00:20:22.368381 sudo[1571]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:20:22.393948 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 12 00:20:22.395336 auditctl[1575]: No rules
Jul 12 00:20:22.396254 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 12 00:20:22.396477 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 12 00:20:22.398270 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 12 00:20:22.423694 augenrules[1593]: No rules
Jul 12 00:20:22.425739 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 12 00:20:22.427018 sudo[1571]: pam_unix(sudo:session): session closed for user root
Jul 12 00:20:22.428766 sshd[1568]: pam_unix(sshd:session): session closed for user core
Jul 12 00:20:22.438163 systemd[1]: sshd@5-10.0.0.97:22-10.0.0.1:52588.service: Deactivated successfully.
Jul 12 00:20:22.439841 systemd[1]: session-6.scope: Deactivated successfully.
Jul 12 00:20:22.441893 systemd-logind[1410]: Session 6 logged out. Waiting for processes to exit.
Jul 12 00:20:22.443129 systemd[1]: Started sshd@6-10.0.0.97:22-10.0.0.1:52594.service - OpenSSH per-connection server daemon (10.0.0.1:52594).
Jul 12 00:20:22.444121 systemd-logind[1410]: Removed session 6.
Jul 12 00:20:22.476777 sshd[1601]: Accepted publickey for core from 10.0.0.1 port 52594 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:20:22.478149 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:20:22.482198 systemd-logind[1410]: New session 7 of user core.
Jul 12 00:20:22.495832 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 12 00:20:22.546646 sudo[1604]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 12 00:20:22.546970 sudo[1604]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 12 00:20:22.898949 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 12 00:20:22.899090 (dockerd)[1622]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 12 00:20:23.163511 dockerd[1622]: time="2025-07-12T00:20:23.163381634Z" level=info msg="Starting up"
Jul 12 00:20:23.315168 dockerd[1622]: time="2025-07-12T00:20:23.315073282Z" level=info msg="Loading containers: start."
Jul 12 00:20:23.404726 kernel: Initializing XFRM netlink socket
Jul 12 00:20:23.467429 systemd-networkd[1347]: docker0: Link UP
Jul 12 00:20:23.486008 dockerd[1622]: time="2025-07-12T00:20:23.485941869Z" level=info msg="Loading containers: done."
Jul 12 00:20:23.500611 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1962749454-merged.mount: Deactivated successfully.
Jul 12 00:20:23.503880 dockerd[1622]: time="2025-07-12T00:20:23.503833004Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 12 00:20:23.503982 dockerd[1622]: time="2025-07-12T00:20:23.503950509Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0
Jul 12 00:20:23.504071 dockerd[1622]: time="2025-07-12T00:20:23.504055069Z" level=info msg="Daemon has completed initialization"
Jul 12 00:20:23.531327 dockerd[1622]: time="2025-07-12T00:20:23.531140127Z" level=info msg="API listen on /run/docker.sock"
Jul 12 00:20:23.531419 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 12 00:20:24.071288 containerd[1428]: time="2025-07-12T00:20:24.071214682Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\""
Jul 12 00:20:24.717882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount424330757.mount: Deactivated successfully.
Jul 12 00:20:25.565020 containerd[1428]: time="2025-07-12T00:20:25.564960231Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:25.566099 containerd[1428]: time="2025-07-12T00:20:25.566058808Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=26328196"
Jul 12 00:20:25.567165 containerd[1428]: time="2025-07-12T00:20:25.567123261Z" level=info msg="ImageCreate event name:\"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:25.571189 containerd[1428]: time="2025-07-12T00:20:25.571113081Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:25.572198 containerd[1428]: time="2025-07-12T00:20:25.572164069Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"26324994\" in 1.500905036s"
Jul 12 00:20:25.572261 containerd[1428]: time="2025-07-12T00:20:25.572203067Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:4ee56e04a4dd8fbc5a022e324327ae1f9b19bdaab8a79644d85d29b70d28e87a\""
Jul 12 00:20:25.573066 containerd[1428]: time="2025-07-12T00:20:25.572968562Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\""
Jul 12 00:20:26.552290 containerd[1428]: time="2025-07-12T00:20:26.552236458Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:26.552816 containerd[1428]: time="2025-07-12T00:20:26.552782847Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=22529230"
Jul 12 00:20:26.553752 containerd[1428]: time="2025-07-12T00:20:26.553719789Z" level=info msg="ImageCreate event name:\"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:26.556743 containerd[1428]: time="2025-07-12T00:20:26.556704753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:26.557936 containerd[1428]: time="2025-07-12T00:20:26.557887439Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"24065018\" in 984.884474ms"
Jul 12 00:20:26.557936 containerd[1428]: time="2025-07-12T00:20:26.557923561Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:3451c4b5bd601398c65e0579f1b720df4e0edde78f7f38e142f2b0be5e9bd038\""
Jul 12 00:20:26.558474 containerd[1428]: time="2025-07-12T00:20:26.558422600Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\""
Jul 12 00:20:27.583193 containerd[1428]: time="2025-07-12T00:20:27.583140188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:27.584119 containerd[1428]: time="2025-07-12T00:20:27.583901058Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=17484143"
Jul 12 00:20:27.584813 containerd[1428]: time="2025-07-12T00:20:27.584779410Z" level=info msg="ImageCreate event name:\"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:27.588905 containerd[1428]: time="2025-07-12T00:20:27.588867076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:27.590856 containerd[1428]: time="2025-07-12T00:20:27.590716206Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"19019949\" in 1.032235946s"
Jul 12 00:20:27.590856 containerd[1428]: time="2025-07-12T00:20:27.590753729Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:3d72026a3748f31411df93e4aaa9c67944b7e0cc311c11eba2aae5e615213d5f\""
Jul 12 00:20:27.591403 containerd[1428]: time="2025-07-12T00:20:27.591352643Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\""
Jul 12 00:20:28.141043 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 12 00:20:28.154263 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:20:28.269693 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:20:28.273399 (kubelet)[1840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 12 00:20:28.309864 kubelet[1840]: E0712 00:20:28.309800 1840 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 12 00:20:28.313279 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 12 00:20:28.313436 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 12 00:20:28.552808 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1216842597.mount: Deactivated successfully.
Jul 12 00:20:28.784142 containerd[1428]: time="2025-07-12T00:20:28.784092839Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:28.785040 containerd[1428]: time="2025-07-12T00:20:28.784859608Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=27378408"
Jul 12 00:20:28.785770 containerd[1428]: time="2025-07-12T00:20:28.785699785Z" level=info msg="ImageCreate event name:\"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:28.787745 containerd[1428]: time="2025-07-12T00:20:28.787691994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:28.788445 containerd[1428]: time="2025-07-12T00:20:28.788413007Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"27377425\" in 1.196999386s"
Jul 12 00:20:28.788509 containerd[1428]: time="2025-07-12T00:20:28.788448732Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:e29293ef7b817bb7b03ce7484edafe6ca0a7087e54074e7d7dcd3bd3c762eee9\""
Jul 12 00:20:28.789150 containerd[1428]: time="2025-07-12T00:20:28.788992120Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Jul 12 00:20:29.317925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount314814349.mount: Deactivated successfully.
Jul 12 00:20:30.066795 containerd[1428]: time="2025-07-12T00:20:30.066441652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:30.067950 containerd[1428]: time="2025-07-12T00:20:30.067903548Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
Jul 12 00:20:30.092948 containerd[1428]: time="2025-07-12T00:20:30.092898411Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:30.095989 containerd[1428]: time="2025-07-12T00:20:30.095946489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:30.097191 containerd[1428]: time="2025-07-12T00:20:30.097155737Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.308127931s"
Jul 12 00:20:30.097230 containerd[1428]: time="2025-07-12T00:20:30.097192743Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
Jul 12 00:20:30.097639 containerd[1428]: time="2025-07-12T00:20:30.097613996Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 12 00:20:30.605138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2934157042.mount: Deactivated successfully.
Jul 12 00:20:30.611970 containerd[1428]: time="2025-07-12T00:20:30.611932791Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:30.612723 containerd[1428]: time="2025-07-12T00:20:30.612685020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 12 00:20:30.613286 containerd[1428]: time="2025-07-12T00:20:30.613260531Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:30.615608 containerd[1428]: time="2025-07-12T00:20:30.615563534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:30.616626 containerd[1428]: time="2025-07-12T00:20:30.616580159Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 518.940507ms"
Jul 12 00:20:30.616626 containerd[1428]: time="2025-07-12T00:20:30.616620083Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 12 00:20:30.617375 containerd[1428]: time="2025-07-12T00:20:30.617306132Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Jul 12 00:20:31.153304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3697375989.mount: Deactivated successfully.
Jul 12 00:20:32.629499 containerd[1428]: time="2025-07-12T00:20:32.629440588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:32.630266 containerd[1428]: time="2025-07-12T00:20:32.630218877Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67812471"
Jul 12 00:20:32.630828 containerd[1428]: time="2025-07-12T00:20:32.630804891Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:32.634553 containerd[1428]: time="2025-07-12T00:20:32.634515250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 12 00:20:32.635979 containerd[1428]: time="2025-07-12T00:20:32.635935465Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.018532222s"
Jul 12 00:20:32.636012 containerd[1428]: time="2025-07-12T00:20:32.635975830Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\""
Jul 12 00:20:38.185503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:20:38.196328 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:20:38.216983 systemd[1]: Reloading requested from client PID 1998 ('systemctl') (unit session-7.scope)...
Jul 12 00:20:38.217009 systemd[1]: Reloading...
Jul 12 00:20:38.286702 zram_generator::config[2035]: No configuration found.
Jul 12 00:20:38.373848 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 12 00:20:38.427917 systemd[1]: Reloading finished in 210 ms.
Jul 12 00:20:38.467995 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:20:38.470616 systemd[1]: kubelet.service: Deactivated successfully.
Jul 12 00:20:38.470833 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:20:38.472383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 12 00:20:38.582274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 12 00:20:38.586380 (kubelet)[2084]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 12 00:20:38.618414 kubelet[2084]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:20:38.618414 kubelet[2084]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 12 00:20:38.618414 kubelet[2084]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 12 00:20:38.618753 kubelet[2084]: I0712 00:20:38.618466 2084 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 12 00:20:39.018082 kubelet[2084]: I0712 00:20:39.018035 2084 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jul 12 00:20:39.018082 kubelet[2084]: I0712 00:20:39.018070 2084 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 12 00:20:39.018356 kubelet[2084]: I0712 00:20:39.018331 2084 server.go:954] "Client rotation is on, will bootstrap in background"
Jul 12 00:20:39.053311 kubelet[2084]: E0712 00:20:39.053246 2084 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.97:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:20:39.054946 kubelet[2084]: I0712 00:20:39.054926 2084 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 12 00:20:39.058978 kubelet[2084]: E0712 00:20:39.058951 2084 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 12 00:20:39.058978 kubelet[2084]: I0712 00:20:39.058980 2084 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 12 00:20:39.061519 kubelet[2084]: I0712 00:20:39.061483 2084 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 12 00:20:39.062704 kubelet[2084]: I0712 00:20:39.062648 2084 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 12 00:20:39.062866 kubelet[2084]: I0712 00:20:39.062703 2084 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 12 00:20:39.062944 kubelet[2084]: I0712 00:20:39.062935 2084 topology_manager.go:138] "Creating topology manager with none policy"
Jul 12 00:20:39.062944 kubelet[2084]: I0712 00:20:39.062944 2084 container_manager_linux.go:304] "Creating device plugin manager"
Jul 12 00:20:39.063156 kubelet[2084]: I0712 00:20:39.063132 2084 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:20:39.068738 kubelet[2084]: I0712 00:20:39.068549 2084 kubelet.go:446] "Attempting to sync node with API server"
Jul 12 00:20:39.068738 kubelet[2084]: I0712 00:20:39.068578 2084 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 12 00:20:39.068738 kubelet[2084]: I0712 00:20:39.068600 2084 kubelet.go:352] "Adding apiserver pod source"
Jul 12 00:20:39.068738 kubelet[2084]: I0712 00:20:39.068610 2084 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 12 00:20:39.071048 kubelet[2084]: W0712 00:20:39.071002 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 12 00:20:39.071166 kubelet[2084]: E0712 00:20:39.071148 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:20:39.071921 kubelet[2084]: I0712 00:20:39.071900 2084 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 12 00:20:39.072340 kubelet[2084]: W0712 00:20:39.071946 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 12 00:20:39.072340 kubelet[2084]: E0712 00:20:39.072106 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:20:39.073161 kubelet[2084]: I0712 00:20:39.073126 2084 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 12 00:20:39.073272 kubelet[2084]: W0712 00:20:39.073253 2084 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 12 00:20:39.074106 kubelet[2084]: I0712 00:20:39.074076 2084 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 12 00:20:39.074155 kubelet[2084]: I0712 00:20:39.074113 2084 server.go:1287] "Started kubelet"
Jul 12 00:20:39.075148 kubelet[2084]: I0712 00:20:39.075118 2084 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jul 12 00:20:39.076166 kubelet[2084]: I0712 00:20:39.076129 2084 server.go:479] "Adding debug handlers to kubelet server"
Jul 12 00:20:39.076728 kubelet[2084]: I0712 00:20:39.076576 2084 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 12 00:20:39.078275 kubelet[2084]: I0712 00:20:39.078218 2084 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 12 00:20:39.078468 kubelet[2084]: I0712 00:20:39.078445 2084 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 12 00:20:39.079409 kubelet[2084]: I0712 00:20:39.079359 2084 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 12 00:20:39.080033 kubelet[2084]: I0712 00:20:39.080012 2084 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 12 00:20:39.081229 kubelet[2084]: I0712 00:20:39.080945 2084 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 12 00:20:39.081229 kubelet[2084]: I0712 00:20:39.081003 2084 reconciler.go:26] "Reconciler: start to sync state"
Jul 12 00:20:39.081596 kubelet[2084]: E0712 00:20:39.081563 2084 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:20:39.084569 kubelet[2084]: I0712 00:20:39.084521 2084 factory.go:221] Registration of the systemd container factory successfully
Jul 12 00:20:39.084641 kubelet[2084]: I0712 00:20:39.084620 2084 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 12 00:20:39.084830 kubelet[2084]: W0712 00:20:39.084793 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 12 00:20:39.084866 kubelet[2084]: E0712 00:20:39.084834 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:20:39.084978 kubelet[2084]: E0712 00:20:39.084935 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="200ms"
Jul 12 00:20:39.086028 kubelet[2084]: E0712 00:20:39.085461 2084 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.97:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.97:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851590c3a2945bc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 00:20:39.074096572 +0000 UTC m=+0.484873272,LastTimestamp:2025-07-12 00:20:39.074096572 +0000 UTC m=+0.484873272,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 12 00:20:39.087080 kubelet[2084]: I0712 00:20:39.086578 2084 factory.go:221] Registration of the containerd container factory successfully
Jul 12 00:20:39.095414 kubelet[2084]: I0712 00:20:39.095370 2084 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jul 12 00:20:39.096401 kubelet[2084]: I0712 00:20:39.096375 2084 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jul 12 00:20:39.096401 kubelet[2084]: I0712 00:20:39.096395 2084 status_manager.go:227] "Starting to sync pod status with apiserver"
Jul 12 00:20:39.096465 kubelet[2084]: I0712 00:20:39.096411 2084 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 12 00:20:39.096465 kubelet[2084]: I0712 00:20:39.096419 2084 kubelet.go:2382] "Starting kubelet main sync loop"
Jul 12 00:20:39.096507 kubelet[2084]: E0712 00:20:39.096458 2084 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 12 00:20:39.099968 kubelet[2084]: W0712 00:20:39.099920 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused
Jul 12 00:20:39.100029 kubelet[2084]: E0712 00:20:39.099975 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError"
Jul 12 00:20:39.100646 kubelet[2084]: I0712 00:20:39.100631 2084 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 12 00:20:39.100646 kubelet[2084]: I0712 00:20:39.100646 2084 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 12 00:20:39.100733 kubelet[2084]: I0712 00:20:39.100721 2084 state_mem.go:36] "Initialized new in-memory state store"
Jul 12 00:20:39.178265 kubelet[2084]: I0712 00:20:39.178222 2084 policy_none.go:49] "None policy: Start"
Jul 12 00:20:39.178265 kubelet[2084]: I0712 00:20:39.178268 2084 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 12 00:20:39.178399 kubelet[2084]: I0712 00:20:39.178282 2084 state_mem.go:35] "Initializing new in-memory state store"
Jul 12 00:20:39.181762 kubelet[2084]: E0712 00:20:39.181731 2084 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 12 00:20:39.184218 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 12 00:20:39.197518 kubelet[2084]: E0712 00:20:39.197449 2084 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Jul 12 00:20:39.201095 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jul 12 00:20:39.203801 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jul 12 00:20:39.216505 kubelet[2084]: I0712 00:20:39.216458 2084 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jul 12 00:20:39.216686 kubelet[2084]: I0712 00:20:39.216648 2084 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 12 00:20:39.216738 kubelet[2084]: I0712 00:20:39.216678 2084 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 12 00:20:39.217420 kubelet[2084]: I0712 00:20:39.217005 2084 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 12 00:20:39.218138 kubelet[2084]: E0712 00:20:39.218108 2084 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 12 00:20:39.218195 kubelet[2084]: E0712 00:20:39.218150 2084 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Jul 12 00:20:39.285651 kubelet[2084]: E0712 00:20:39.285513 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="400ms"
Jul 12 00:20:39.318687 kubelet[2084]: I0712 00:20:39.318643 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 12 00:20:39.319133 kubelet[2084]: E0712 00:20:39.319090 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost"
Jul 12 00:20:39.406595 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice.
Jul 12 00:20:39.436027 kubelet[2084]: E0712 00:20:39.435981 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:20:39.438768 systemd[1]: Created slice kubepods-burstable-pod990c5fd1ce43ac0cac8b76da829366ef.slice - libcontainer container kubepods-burstable-pod990c5fd1ce43ac0cac8b76da829366ef.slice.
Jul 12 00:20:39.453841 kubelet[2084]: E0712 00:20:39.453641 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 12 00:20:39.456002 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice.
Jul 12 00:20:39.457388 kubelet[2084]: E0712 00:20:39.457367 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:20:39.482706 kubelet[2084]: I0712 00:20:39.482634 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:39.482706 kubelet[2084]: I0712 00:20:39.482684 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:39.482706 kubelet[2084]: I0712 00:20:39.482703 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:39.482854 kubelet[2084]: I0712 00:20:39.482722 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:39.482854 kubelet[2084]: I0712 00:20:39.482763 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:39.482854 kubelet[2084]: I0712 00:20:39.482794 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/990c5fd1ce43ac0cac8b76da829366ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"990c5fd1ce43ac0cac8b76da829366ef\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:39.482854 kubelet[2084]: I0712 00:20:39.482822 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/990c5fd1ce43ac0cac8b76da829366ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"990c5fd1ce43ac0cac8b76da829366ef\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:39.482854 kubelet[2084]: I0712 00:20:39.482849 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/990c5fd1ce43ac0cac8b76da829366ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"990c5fd1ce43ac0cac8b76da829366ef\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:39.482951 kubelet[2084]: I0712 00:20:39.482865 2084 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:39.520721 kubelet[2084]: I0712 00:20:39.520697 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:20:39.521090 kubelet[2084]: E0712 
00:20:39.521040 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 12 00:20:39.686920 kubelet[2084]: E0712 00:20:39.686869 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="800ms" Jul 12 00:20:39.737295 kubelet[2084]: E0712 00:20:39.737257 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:39.737972 containerd[1428]: time="2025-07-12T00:20:39.737878973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:39.754302 kubelet[2084]: E0712 00:20:39.754209 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:39.754728 containerd[1428]: time="2025-07-12T00:20:39.754686845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:990c5fd1ce43ac0cac8b76da829366ef,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:39.758413 kubelet[2084]: E0712 00:20:39.758324 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:39.758772 containerd[1428]: time="2025-07-12T00:20:39.758737367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:39.922302 kubelet[2084]: 
I0712 00:20:39.922255 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:20:39.922621 kubelet[2084]: E0712 00:20:39.922594 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 12 00:20:40.166279 kubelet[2084]: W0712 00:20:40.166215 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 12 00:20:40.166397 kubelet[2084]: E0712 00:20:40.166283 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.97:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:40.178893 kubelet[2084]: W0712 00:20:40.178857 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 12 00:20:40.178893 kubelet[2084]: E0712 00:20:40.178884 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.97:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:40.281135 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount95792170.mount: Deactivated successfully. 
Jul 12 00:20:40.285378 containerd[1428]: time="2025-07-12T00:20:40.285288534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:20:40.287587 containerd[1428]: time="2025-07-12T00:20:40.287529715Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:20:40.288721 containerd[1428]: time="2025-07-12T00:20:40.288656161Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 12 00:20:40.289263 containerd[1428]: time="2025-07-12T00:20:40.289179731Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:20:40.289784 containerd[1428]: time="2025-07-12T00:20:40.289740676Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:20:40.290620 containerd[1428]: time="2025-07-12T00:20:40.290556050Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:20:40.291262 containerd[1428]: time="2025-07-12T00:20:40.291231758Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 12 00:20:40.293413 containerd[1428]: time="2025-07-12T00:20:40.293317363Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 00:20:40.294861 
containerd[1428]: time="2025-07-12T00:20:40.294832829Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 556.851206ms" Jul 12 00:20:40.298836 containerd[1428]: time="2025-07-12T00:20:40.298580042Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.768805ms" Jul 12 00:20:40.299341 containerd[1428]: time="2025-07-12T00:20:40.299311432Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 544.540205ms" Jul 12 00:20:40.386795 kubelet[2084]: W0712 00:20:40.386716 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 12 00:20:40.386795 kubelet[2084]: E0712 00:20:40.386786 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.97:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" logger="UnhandledError" Jul 12 00:20:40.443874 containerd[1428]: 
time="2025-07-12T00:20:40.443686598Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:40.443874 containerd[1428]: time="2025-07-12T00:20:40.443742241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:40.443874 containerd[1428]: time="2025-07-12T00:20:40.443757871Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:40.444936 containerd[1428]: time="2025-07-12T00:20:40.444279882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:40.444936 containerd[1428]: time="2025-07-12T00:20:40.444118709Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:40.444936 containerd[1428]: time="2025-07-12T00:20:40.444165918Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:40.444936 containerd[1428]: time="2025-07-12T00:20:40.444177630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:40.444936 containerd[1428]: time="2025-07-12T00:20:40.444255098Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:40.445641 containerd[1428]: time="2025-07-12T00:20:40.445566021Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:40.445797 containerd[1428]: time="2025-07-12T00:20:40.445628060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:40.445797 containerd[1428]: time="2025-07-12T00:20:40.445745141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:40.445983 containerd[1428]: time="2025-07-12T00:20:40.445936853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:40.466839 systemd[1]: Started cri-containerd-32281bcc82a1168186df6dd086dc34b4934562ed166e2a1382f3e6b0a3f1ecfb.scope - libcontainer container 32281bcc82a1168186df6dd086dc34b4934562ed166e2a1382f3e6b0a3f1ecfb. Jul 12 00:20:40.471509 systemd[1]: Started cri-containerd-ec0c131fac96a9beb8fee297f8878ca7b0bd04942e58767e84f5f25295810f9f.scope - libcontainer container ec0c131fac96a9beb8fee297f8878ca7b0bd04942e58767e84f5f25295810f9f. Jul 12 00:20:40.472886 systemd[1]: Started cri-containerd-f70bed1c085a69173fdedf08abe64a9d60e6de7e535fa8cfbbd3dec6917d584e.scope - libcontainer container f70bed1c085a69173fdedf08abe64a9d60e6de7e535fa8cfbbd3dec6917d584e. 
Jul 12 00:20:40.487429 kubelet[2084]: E0712 00:20:40.487305 2084 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.97:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.97:6443: connect: connection refused" interval="1.6s" Jul 12 00:20:40.507588 containerd[1428]: time="2025-07-12T00:20:40.507542796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:990c5fd1ce43ac0cac8b76da829366ef,Namespace:kube-system,Attempt:0,} returns sandbox id \"f70bed1c085a69173fdedf08abe64a9d60e6de7e535fa8cfbbd3dec6917d584e\"" Jul 12 00:20:40.508469 containerd[1428]: time="2025-07-12T00:20:40.508440235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec0c131fac96a9beb8fee297f8878ca7b0bd04942e58767e84f5f25295810f9f\"" Jul 12 00:20:40.508658 containerd[1428]: time="2025-07-12T00:20:40.508628069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"32281bcc82a1168186df6dd086dc34b4934562ed166e2a1382f3e6b0a3f1ecfb\"" Jul 12 00:20:40.509060 kubelet[2084]: E0712 00:20:40.508888 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:40.509721 kubelet[2084]: E0712 00:20:40.509575 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:40.509721 kubelet[2084]: E0712 00:20:40.509616 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" Jul 12 00:20:40.511331 containerd[1428]: time="2025-07-12T00:20:40.511272780Z" level=info msg="CreateContainer within sandbox \"f70bed1c085a69173fdedf08abe64a9d60e6de7e535fa8cfbbd3dec6917d584e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 00:20:40.511449 containerd[1428]: time="2025-07-12T00:20:40.511422920Z" level=info msg="CreateContainer within sandbox \"32281bcc82a1168186df6dd086dc34b4934562ed166e2a1382f3e6b0a3f1ecfb\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 00:20:40.511575 containerd[1428]: time="2025-07-12T00:20:40.511299402Z" level=info msg="CreateContainer within sandbox \"ec0c131fac96a9beb8fee297f8878ca7b0bd04942e58767e84f5f25295810f9f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 00:20:40.586300 containerd[1428]: time="2025-07-12T00:20:40.586247818Z" level=info msg="CreateContainer within sandbox \"f70bed1c085a69173fdedf08abe64a9d60e6de7e535fa8cfbbd3dec6917d584e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9b67a8cc689c5b9f45a5cf76f2138147ea89c666b5f9ceb678aa86e5ee62c044\"" Jul 12 00:20:40.586937 containerd[1428]: time="2025-07-12T00:20:40.586902940Z" level=info msg="StartContainer for \"9b67a8cc689c5b9f45a5cf76f2138147ea89c666b5f9ceb678aa86e5ee62c044\"" Jul 12 00:20:40.587005 kubelet[2084]: W0712 00:20:40.586955 2084 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.97:6443: connect: connection refused Jul 12 00:20:40.587041 kubelet[2084]: E0712 00:20:40.587011 2084 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.97:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.97:6443: connect: connection refused" 
logger="UnhandledError" Jul 12 00:20:40.588918 containerd[1428]: time="2025-07-12T00:20:40.588870383Z" level=info msg="CreateContainer within sandbox \"ec0c131fac96a9beb8fee297f8878ca7b0bd04942e58767e84f5f25295810f9f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"0c2149fc68218122ac8ffcc038d70ca050929491ad36f708244d8fc5eb2fb428\"" Jul 12 00:20:40.589786 containerd[1428]: time="2025-07-12T00:20:40.589726331Z" level=info msg="StartContainer for \"0c2149fc68218122ac8ffcc038d70ca050929491ad36f708244d8fc5eb2fb428\"" Jul 12 00:20:40.589878 containerd[1428]: time="2025-07-12T00:20:40.589830621Z" level=info msg="CreateContainer within sandbox \"32281bcc82a1168186df6dd086dc34b4934562ed166e2a1382f3e6b0a3f1ecfb\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0ed02429701f2ef5f58b115c68a427151df94ebbf025b01c908dabdce6c93f66\"" Jul 12 00:20:40.590454 containerd[1428]: time="2025-07-12T00:20:40.590273765Z" level=info msg="StartContainer for \"0ed02429701f2ef5f58b115c68a427151df94ebbf025b01c908dabdce6c93f66\"" Jul 12 00:20:40.615917 systemd[1]: Started cri-containerd-9b67a8cc689c5b9f45a5cf76f2138147ea89c666b5f9ceb678aa86e5ee62c044.scope - libcontainer container 9b67a8cc689c5b9f45a5cf76f2138147ea89c666b5f9ceb678aa86e5ee62c044. Jul 12 00:20:40.620963 systemd[1]: Started cri-containerd-0c2149fc68218122ac8ffcc038d70ca050929491ad36f708244d8fc5eb2fb428.scope - libcontainer container 0c2149fc68218122ac8ffcc038d70ca050929491ad36f708244d8fc5eb2fb428. Jul 12 00:20:40.622193 systemd[1]: Started cri-containerd-0ed02429701f2ef5f58b115c68a427151df94ebbf025b01c908dabdce6c93f66.scope - libcontainer container 0ed02429701f2ef5f58b115c68a427151df94ebbf025b01c908dabdce6c93f66. 
Jul 12 00:20:40.689569 containerd[1428]: time="2025-07-12T00:20:40.689517766Z" level=info msg="StartContainer for \"9b67a8cc689c5b9f45a5cf76f2138147ea89c666b5f9ceb678aa86e5ee62c044\" returns successfully" Jul 12 00:20:40.689810 containerd[1428]: time="2025-07-12T00:20:40.689535434Z" level=info msg="StartContainer for \"0ed02429701f2ef5f58b115c68a427151df94ebbf025b01c908dabdce6c93f66\" returns successfully" Jul 12 00:20:40.689810 containerd[1428]: time="2025-07-12T00:20:40.689540990Z" level=info msg="StartContainer for \"0c2149fc68218122ac8ffcc038d70ca050929491ad36f708244d8fc5eb2fb428\" returns successfully" Jul 12 00:20:40.728642 kubelet[2084]: I0712 00:20:40.728535 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:20:40.728934 kubelet[2084]: E0712 00:20:40.728886 2084 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.97:6443/api/v1/nodes\": dial tcp 10.0.0.97:6443: connect: connection refused" node="localhost" Jul 12 00:20:41.111748 kubelet[2084]: E0712 00:20:41.111620 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:20:41.112484 kubelet[2084]: E0712 00:20:41.112447 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:41.112738 kubelet[2084]: E0712 00:20:41.112714 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:20:41.112838 kubelet[2084]: E0712 00:20:41.112817 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:41.116189 kubelet[2084]: E0712 00:20:41.116157 2084 kubelet.go:3190] 
"No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:20:41.116282 kubelet[2084]: E0712 00:20:41.116266 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:42.116814 kubelet[2084]: E0712 00:20:42.116590 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:20:42.116814 kubelet[2084]: E0712 00:20:42.116669 2084 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 00:20:42.116814 kubelet[2084]: E0712 00:20:42.116754 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:42.116814 kubelet[2084]: E0712 00:20:42.116763 2084 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:42.331716 kubelet[2084]: I0712 00:20:42.329997 2084 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:20:42.643115 kubelet[2084]: E0712 00:20:42.643064 2084 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 00:20:42.702574 kubelet[2084]: I0712 00:20:42.702490 2084 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:20:42.702574 kubelet[2084]: E0712 00:20:42.702549 2084 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 00:20:42.781532 kubelet[2084]: 
I0712 00:20:42.781473 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:42.788680 kubelet[2084]: E0712 00:20:42.788451 2084 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:42.788680 kubelet[2084]: I0712 00:20:42.788482 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:42.790272 kubelet[2084]: E0712 00:20:42.790245 2084 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:42.790527 kubelet[2084]: I0712 00:20:42.790371 2084 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:42.793357 kubelet[2084]: E0712 00:20:42.793328 2084 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:43.073743 kubelet[2084]: I0712 00:20:43.073261 2084 apiserver.go:52] "Watching apiserver" Jul 12 00:20:43.081750 kubelet[2084]: I0712 00:20:43.081706 2084 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:20:44.548774 systemd[1]: Reloading requested from client PID 2362 ('systemctl') (unit session-7.scope)... Jul 12 00:20:44.549141 systemd[1]: Reloading... Jul 12 00:20:44.631702 zram_generator::config[2401]: No configuration found. 
Jul 12 00:20:44.796262 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 00:20:44.860907 systemd[1]: Reloading finished in 311 ms. Jul 12 00:20:44.898614 kubelet[2084]: I0712 00:20:44.898559 2084 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:20:44.898700 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:20:44.912569 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 00:20:44.912863 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:20:44.926900 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 00:20:45.054537 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 00:20:45.059850 (kubelet)[2443]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 00:20:45.107362 kubelet[2443]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 00:20:45.107362 kubelet[2443]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 00:20:45.107362 kubelet[2443]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 00:20:45.107760 kubelet[2443]: I0712 00:20:45.107463 2443 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 00:20:45.119943 kubelet[2443]: I0712 00:20:45.118276 2443 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 12 00:20:45.119943 kubelet[2443]: I0712 00:20:45.118304 2443 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 00:20:45.119943 kubelet[2443]: I0712 00:20:45.118572 2443 server.go:954] "Client rotation is on, will bootstrap in background" Jul 12 00:20:45.120271 kubelet[2443]: I0712 00:20:45.120221 2443 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 12 00:20:45.122669 kubelet[2443]: I0712 00:20:45.122533 2443 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 00:20:45.128920 kubelet[2443]: E0712 00:20:45.128855 2443 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 12 00:20:45.128920 kubelet[2443]: I0712 00:20:45.128905 2443 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 12 00:20:45.132814 kubelet[2443]: I0712 00:20:45.132780 2443 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 00:20:45.133029 kubelet[2443]: I0712 00:20:45.132992 2443 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 00:20:45.133195 kubelet[2443]: I0712 00:20:45.133019 2443 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 00:20:45.133303 kubelet[2443]: I0712 00:20:45.133198 2443 topology_manager.go:138] "Creating topology manager with none policy" 
Jul 12 00:20:45.133303 kubelet[2443]: I0712 00:20:45.133208 2443 container_manager_linux.go:304] "Creating device plugin manager" Jul 12 00:20:45.133303 kubelet[2443]: I0712 00:20:45.133250 2443 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:20:45.133400 kubelet[2443]: I0712 00:20:45.133383 2443 kubelet.go:446] "Attempting to sync node with API server" Jul 12 00:20:45.133400 kubelet[2443]: I0712 00:20:45.133398 2443 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 00:20:45.133461 kubelet[2443]: I0712 00:20:45.133414 2443 kubelet.go:352] "Adding apiserver pod source" Jul 12 00:20:45.133461 kubelet[2443]: I0712 00:20:45.133423 2443 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 00:20:45.134869 kubelet[2443]: I0712 00:20:45.134366 2443 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jul 12 00:20:45.134869 kubelet[2443]: I0712 00:20:45.134855 2443 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 12 00:20:45.135275 kubelet[2443]: I0712 00:20:45.135245 2443 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 00:20:45.135316 kubelet[2443]: I0712 00:20:45.135283 2443 server.go:1287] "Started kubelet" Jul 12 00:20:45.136046 kubelet[2443]: I0712 00:20:45.135373 2443 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.135611 2443 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.137899 2443 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.138993 2443 server.go:479] "Adding debug handlers to kubelet server" Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.140045 2443 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.140416 2443 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 00:20:45.142407 kubelet[2443]: E0712 00:20:45.141096 2443 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.141125 2443 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.141269 2443 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 00:20:45.142407 kubelet[2443]: I0712 00:20:45.141378 2443 reconciler.go:26] "Reconciler: start to sync state" Jul 12 00:20:45.147132 kubelet[2443]: I0712 00:20:45.146075 2443 factory.go:221] Registration of the systemd container factory successfully Jul 12 00:20:45.147132 kubelet[2443]: I0712 00:20:45.146193 2443 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 00:20:45.157591 kubelet[2443]: I0712 00:20:45.157552 2443 factory.go:221] Registration of the containerd container factory successfully Jul 12 00:20:45.172832 kubelet[2443]: I0712 00:20:45.172768 2443 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 12 00:20:45.173965 kubelet[2443]: I0712 00:20:45.173910 2443 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 12 00:20:45.173965 kubelet[2443]: I0712 00:20:45.173945 2443 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 12 00:20:45.173965 kubelet[2443]: I0712 00:20:45.173966 2443 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 00:20:45.173965 kubelet[2443]: I0712 00:20:45.173974 2443 kubelet.go:2382] "Starting kubelet main sync loop" Jul 12 00:20:45.174115 kubelet[2443]: E0712 00:20:45.174022 2443 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 00:20:45.199553 kubelet[2443]: I0712 00:20:45.199458 2443 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 00:20:45.199553 kubelet[2443]: I0712 00:20:45.199480 2443 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 00:20:45.199553 kubelet[2443]: I0712 00:20:45.199512 2443 state_mem.go:36] "Initialized new in-memory state store" Jul 12 00:20:45.199863 kubelet[2443]: I0712 00:20:45.199727 2443 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 00:20:45.199863 kubelet[2443]: I0712 00:20:45.199740 2443 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 00:20:45.199863 kubelet[2443]: I0712 00:20:45.199760 2443 policy_none.go:49] "None policy: Start" Jul 12 00:20:45.199863 kubelet[2443]: I0712 00:20:45.199774 2443 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 00:20:45.199863 kubelet[2443]: I0712 00:20:45.199784 2443 state_mem.go:35] "Initializing new in-memory state store" Jul 12 00:20:45.200041 kubelet[2443]: I0712 00:20:45.199902 2443 state_mem.go:75] "Updated machine memory state" Jul 12 00:20:45.203857 kubelet[2443]: I0712 00:20:45.203835 2443 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 12 00:20:45.204014 kubelet[2443]: I0712 00:20:45.204001 2443 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 00:20:45.204295 kubelet[2443]: I0712 00:20:45.204038 2443 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 00:20:45.204295 kubelet[2443]: I0712 00:20:45.204221 2443 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 00:20:45.205174 kubelet[2443]: E0712 00:20:45.205154 2443 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 12 00:20:45.274776 kubelet[2443]: I0712 00:20:45.274727 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:45.274964 kubelet[2443]: I0712 00:20:45.274727 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:45.275312 kubelet[2443]: I0712 00:20:45.275297 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:45.308282 kubelet[2443]: I0712 00:20:45.308250 2443 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 00:20:45.314309 kubelet[2443]: I0712 00:20:45.314278 2443 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 00:20:45.314435 kubelet[2443]: I0712 00:20:45.314353 2443 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 00:20:45.442407 kubelet[2443]: I0712 00:20:45.442366 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/990c5fd1ce43ac0cac8b76da829366ef-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"990c5fd1ce43ac0cac8b76da829366ef\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:45.442407 kubelet[2443]: I0712 00:20:45.442410 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:45.442570 kubelet[2443]: I0712 00:20:45.442432 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:45.442570 kubelet[2443]: I0712 00:20:45.442447 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/990c5fd1ce43ac0cac8b76da829366ef-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"990c5fd1ce43ac0cac8b76da829366ef\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:45.442570 kubelet[2443]: I0712 00:20:45.442464 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:45.442570 kubelet[2443]: I0712 00:20:45.442478 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/990c5fd1ce43ac0cac8b76da829366ef-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"990c5fd1ce43ac0cac8b76da829366ef\") " pod="kube-system/kube-apiserver-localhost" Jul 12 00:20:45.442570 kubelet[2443]: I0712 00:20:45.442494 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:45.442700 kubelet[2443]: I0712 00:20:45.442511 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:45.442700 kubelet[2443]: I0712 00:20:45.442526 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 00:20:45.549514 sudo[2483]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 00:20:45.549830 sudo[2483]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 12 00:20:45.585261 kubelet[2443]: E0712 00:20:45.585207 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:45.585261 kubelet[2443]: E0712 00:20:45.585254 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:45.585449 kubelet[2443]: E0712 00:20:45.585369 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:45.973081 sudo[2483]: pam_unix(sudo:session): session closed for user root Jul 12 00:20:46.134614 kubelet[2443]: I0712 00:20:46.134558 2443 
apiserver.go:52] "Watching apiserver" Jul 12 00:20:46.141746 kubelet[2443]: I0712 00:20:46.141705 2443 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 00:20:46.184067 kubelet[2443]: E0712 00:20:46.184027 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:46.184172 kubelet[2443]: E0712 00:20:46.184099 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:46.184435 kubelet[2443]: I0712 00:20:46.184404 2443 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:46.192364 kubelet[2443]: I0712 00:20:46.192281 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.192261106 podStartE2EDuration="1.192261106s" podCreationTimestamp="2025-07-12 00:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:46.179657555 +0000 UTC m=+1.116571494" watchObservedRunningTime="2025-07-12 00:20:46.192261106 +0000 UTC m=+1.129175045" Jul 12 00:20:46.195694 kubelet[2443]: E0712 00:20:46.195651 2443 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 00:20:46.195869 kubelet[2443]: E0712 00:20:46.195850 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:46.201539 kubelet[2443]: I0712 00:20:46.201482 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.201467096 podStartE2EDuration="1.201467096s" podCreationTimestamp="2025-07-12 00:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:46.193689556 +0000 UTC m=+1.130603495" watchObservedRunningTime="2025-07-12 00:20:46.201467096 +0000 UTC m=+1.138381035" Jul 12 00:20:46.211416 kubelet[2443]: I0712 00:20:46.211342 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.211325804 podStartE2EDuration="1.211325804s" podCreationTimestamp="2025-07-12 00:20:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:46.202196092 +0000 UTC m=+1.139110031" watchObservedRunningTime="2025-07-12 00:20:46.211325804 +0000 UTC m=+1.148239743" Jul 12 00:20:47.185780 kubelet[2443]: E0712 00:20:47.185743 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:47.186105 kubelet[2443]: E0712 00:20:47.185787 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:47.186105 kubelet[2443]: E0712 00:20:47.185870 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:47.841368 sudo[1604]: pam_unix(sudo:session): session closed for user root Jul 12 00:20:47.842956 sshd[1601]: pam_unix(sshd:session): session closed for user core Jul 12 00:20:47.845995 systemd[1]: sshd@6-10.0.0.97:22-10.0.0.1:52594.service: Deactivated successfully. 
Jul 12 00:20:47.847635 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 00:20:47.848651 systemd[1]: session-7.scope: Consumed 8.095s CPU time, 153.3M memory peak, 0B memory swap peak. Jul 12 00:20:47.849330 systemd-logind[1410]: Session 7 logged out. Waiting for processes to exit. Jul 12 00:20:47.850233 systemd-logind[1410]: Removed session 7. Jul 12 00:20:48.186519 kubelet[2443]: E0712 00:20:48.186492 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:50.797890 kubelet[2443]: E0712 00:20:50.797853 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:51.192343 kubelet[2443]: E0712 00:20:51.192308 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:51.661700 kubelet[2443]: I0712 00:20:51.661672 2443 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 00:20:51.662064 containerd[1428]: time="2025-07-12T00:20:51.662009511Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 12 00:20:51.662340 kubelet[2443]: I0712 00:20:51.662220 2443 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 00:20:52.193366 kubelet[2443]: E0712 00:20:52.193322 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:52.567122 systemd[1]: Created slice kubepods-besteffort-podced9f038_788d_4239_a761_a2c0b5ad0aea.slice - libcontainer container kubepods-besteffort-podced9f038_788d_4239_a761_a2c0b5ad0aea.slice. Jul 12 00:20:52.578990 systemd[1]: Created slice kubepods-burstable-pod9e030b61_a17b_4d59_aee6_e78f62adddcb.slice - libcontainer container kubepods-burstable-pod9e030b61_a17b_4d59_aee6_e78f62adddcb.slice. Jul 12 00:20:52.587815 kubelet[2443]: I0712 00:20:52.587744 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ced9f038-788d-4239-a761-a2c0b5ad0aea-kube-proxy\") pod \"kube-proxy-xjwnc\" (UID: \"ced9f038-788d-4239-a761-a2c0b5ad0aea\") " pod="kube-system/kube-proxy-xjwnc" Jul 12 00:20:52.587815 kubelet[2443]: I0712 00:20:52.587783 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-hostproc\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.587815 kubelet[2443]: I0712 00:20:52.587802 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-cgroup\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.587815 kubelet[2443]: I0712 00:20:52.587818 2443 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-etc-cni-netd\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588337 kubelet[2443]: I0712 00:20:52.587836 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-net\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588337 kubelet[2443]: I0712 00:20:52.587852 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-run\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588337 kubelet[2443]: I0712 00:20:52.587867 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccd72\" (UniqueName: \"kubernetes.io/projected/ced9f038-788d-4239-a761-a2c0b5ad0aea-kube-api-access-ccd72\") pod \"kube-proxy-xjwnc\" (UID: \"ced9f038-788d-4239-a761-a2c0b5ad0aea\") " pod="kube-system/kube-proxy-xjwnc" Jul 12 00:20:52.588337 kubelet[2443]: I0712 00:20:52.587881 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-hubble-tls\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588337 kubelet[2443]: I0712 00:20:52.587895 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" 
(UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-xtables-lock\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588742 kubelet[2443]: I0712 00:20:52.587911 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-config-path\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588742 kubelet[2443]: I0712 00:20:52.587930 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-bpf-maps\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588742 kubelet[2443]: I0712 00:20:52.587946 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e030b61-a17b-4d59-aee6-e78f62adddcb-clustermesh-secrets\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588742 kubelet[2443]: I0712 00:20:52.587963 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ced9f038-788d-4239-a761-a2c0b5ad0aea-xtables-lock\") pod \"kube-proxy-xjwnc\" (UID: \"ced9f038-788d-4239-a761-a2c0b5ad0aea\") " pod="kube-system/kube-proxy-xjwnc" Jul 12 00:20:52.588742 kubelet[2443]: I0712 00:20:52.587989 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-lib-modules\") pod \"cilium-q6b64\" (UID: 
\"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.588742 kubelet[2443]: I0712 00:20:52.588023 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-kernel\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.589205 kubelet[2443]: I0712 00:20:52.588039 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6k5xx\" (UniqueName: \"kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-kube-api-access-6k5xx\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.589205 kubelet[2443]: I0712 00:20:52.588055 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ced9f038-788d-4239-a761-a2c0b5ad0aea-lib-modules\") pod \"kube-proxy-xjwnc\" (UID: \"ced9f038-788d-4239-a761-a2c0b5ad0aea\") " pod="kube-system/kube-proxy-xjwnc" Jul 12 00:20:52.589205 kubelet[2443]: I0712 00:20:52.588070 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cni-path\") pod \"cilium-q6b64\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " pod="kube-system/cilium-q6b64" Jul 12 00:20:52.770698 systemd[1]: Created slice kubepods-besteffort-pod08bbdb7a_df6a_4344_b96e_f30a27e23373.slice - libcontainer container kubepods-besteffort-pod08bbdb7a_df6a_4344_b96e_f30a27e23373.slice. 
Jul 12 00:20:52.789978 kubelet[2443]: I0712 00:20:52.789687 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mcn95\" (UniqueName: \"kubernetes.io/projected/08bbdb7a-df6a-4344-b96e-f30a27e23373-kube-api-access-mcn95\") pod \"cilium-operator-6c4d7847fc-mxx7p\" (UID: \"08bbdb7a-df6a-4344-b96e-f30a27e23373\") " pod="kube-system/cilium-operator-6c4d7847fc-mxx7p" Jul 12 00:20:52.789978 kubelet[2443]: I0712 00:20:52.789732 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08bbdb7a-df6a-4344-b96e-f30a27e23373-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-mxx7p\" (UID: \"08bbdb7a-df6a-4344-b96e-f30a27e23373\") " pod="kube-system/cilium-operator-6c4d7847fc-mxx7p" Jul 12 00:20:52.876945 kubelet[2443]: E0712 00:20:52.876822 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:52.877553 containerd[1428]: time="2025-07-12T00:20:52.877439721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xjwnc,Uid:ced9f038-788d-4239-a761-a2c0b5ad0aea,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:52.881395 kubelet[2443]: E0712 00:20:52.881071 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:52.881564 containerd[1428]: time="2025-07-12T00:20:52.881486911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6b64,Uid:9e030b61-a17b-4d59-aee6-e78f62adddcb,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:52.905309 containerd[1428]: time="2025-07-12T00:20:52.905219226Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:52.905725 containerd[1428]: time="2025-07-12T00:20:52.905581860Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:52.905725 containerd[1428]: time="2025-07-12T00:20:52.905633477Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:52.905968 containerd[1428]: time="2025-07-12T00:20:52.905914948Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:52.912646 containerd[1428]: time="2025-07-12T00:20:52.912510814Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:52.912646 containerd[1428]: time="2025-07-12T00:20:52.912569187Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:52.912646 containerd[1428]: time="2025-07-12T00:20:52.912580622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:52.912884 containerd[1428]: time="2025-07-12T00:20:52.912683895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:52.923818 systemd[1]: Started cri-containerd-95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc.scope - libcontainer container 95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc. Jul 12 00:20:52.926322 systemd[1]: Started cri-containerd-7f91d0added9d18dfd06088a91bd5a4505d6ae444873beaa1aa1caf9d6f1ca96.scope - libcontainer container 7f91d0added9d18dfd06088a91bd5a4505d6ae444873beaa1aa1caf9d6f1ca96. 
Jul 12 00:20:52.946129 containerd[1428]: time="2025-07-12T00:20:52.946090428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-q6b64,Uid:9e030b61-a17b-4d59-aee6-e78f62adddcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\"" Jul 12 00:20:52.946785 kubelet[2443]: E0712 00:20:52.946766 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:52.947874 containerd[1428]: time="2025-07-12T00:20:52.947850144Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 00:20:52.951036 containerd[1428]: time="2025-07-12T00:20:52.950623277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-xjwnc,Uid:ced9f038-788d-4239-a761-a2c0b5ad0aea,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f91d0added9d18dfd06088a91bd5a4505d6ae444873beaa1aa1caf9d6f1ca96\"" Jul 12 00:20:52.951891 kubelet[2443]: E0712 00:20:52.951741 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:52.955269 containerd[1428]: time="2025-07-12T00:20:52.954827475Z" level=info msg="CreateContainer within sandbox \"7f91d0added9d18dfd06088a91bd5a4505d6ae444873beaa1aa1caf9d6f1ca96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 00:20:52.971409 containerd[1428]: time="2025-07-12T00:20:52.971351364Z" level=info msg="CreateContainer within sandbox \"7f91d0added9d18dfd06088a91bd5a4505d6ae444873beaa1aa1caf9d6f1ca96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"020c08d5879862451658d49376360144f6713aa4b60dbbb099355cbc0f80fe3f\"" Jul 12 00:20:52.972046 containerd[1428]: time="2025-07-12T00:20:52.971996789Z" 
level=info msg="StartContainer for \"020c08d5879862451658d49376360144f6713aa4b60dbbb099355cbc0f80fe3f\"" Jul 12 00:20:52.997821 systemd[1]: Started cri-containerd-020c08d5879862451658d49376360144f6713aa4b60dbbb099355cbc0f80fe3f.scope - libcontainer container 020c08d5879862451658d49376360144f6713aa4b60dbbb099355cbc0f80fe3f. Jul 12 00:20:53.024244 containerd[1428]: time="2025-07-12T00:20:53.024130100Z" level=info msg="StartContainer for \"020c08d5879862451658d49376360144f6713aa4b60dbbb099355cbc0f80fe3f\" returns successfully" Jul 12 00:20:53.077353 kubelet[2443]: E0712 00:20:53.076872 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.077488 containerd[1428]: time="2025-07-12T00:20:53.077323311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mxx7p,Uid:08bbdb7a-df6a-4344-b96e-f30a27e23373,Namespace:kube-system,Attempt:0,}" Jul 12 00:20:53.102693 containerd[1428]: time="2025-07-12T00:20:53.102613115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:20:53.102693 containerd[1428]: time="2025-07-12T00:20:53.102679646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:20:53.102835 containerd[1428]: time="2025-07-12T00:20:53.102702276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:53.102835 containerd[1428]: time="2025-07-12T00:20:53.102792036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:20:53.121890 systemd[1]: Started cri-containerd-d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889.scope - libcontainer container d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889. Jul 12 00:20:53.150572 containerd[1428]: time="2025-07-12T00:20:53.150529863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-mxx7p,Uid:08bbdb7a-df6a-4344-b96e-f30a27e23373,Namespace:kube-system,Attempt:0,} returns sandbox id \"d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889\"" Jul 12 00:20:53.151215 kubelet[2443]: E0712 00:20:53.151194 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:53.202656 kubelet[2443]: E0712 00:20:53.202612 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:56.953360 kubelet[2443]: E0712 00:20:56.953303 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:20:56.961879 kubelet[2443]: I0712 00:20:56.961831 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-xjwnc" podStartSLOduration=4.961709352 podStartE2EDuration="4.961709352s" podCreationTimestamp="2025-07-12 00:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:20:53.213231265 +0000 UTC m=+8.150145204" watchObservedRunningTime="2025-07-12 00:20:56.961709352 +0000 UTC m=+11.898623251" Jul 12 00:20:58.061400 kubelet[2443]: E0712 00:20:58.061344 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:01.071034 update_engine[1412]: I20250712 00:21:01.070962 1412 update_attempter.cc:509] Updating boot flags... Jul 12 00:21:01.095684 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2824) Jul 12 00:21:01.608881 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount906546997.mount: Deactivated successfully. Jul 12 00:21:02.972681 containerd[1428]: time="2025-07-12T00:21:02.972422940Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:21:02.973784 containerd[1428]: time="2025-07-12T00:21:02.973111351Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 12 00:21:02.973784 containerd[1428]: time="2025-07-12T00:21:02.973567520Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:21:02.975377 containerd[1428]: time="2025-07-12T00:21:02.975233845Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.027348396s" Jul 12 00:21:02.975377 containerd[1428]: time="2025-07-12T00:21:02.975272752Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference 
\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 00:21:02.978142 containerd[1428]: time="2025-07-12T00:21:02.977938466Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 00:21:02.981876 containerd[1428]: time="2025-07-12T00:21:02.981835369Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 00:21:03.006683 containerd[1428]: time="2025-07-12T00:21:03.006624506Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\"" Jul 12 00:21:03.007202 containerd[1428]: time="2025-07-12T00:21:03.007161812Z" level=info msg="StartContainer for \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\"" Jul 12 00:21:03.024079 systemd[1]: run-containerd-runc-k8s.io-d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb-runc.XmMr04.mount: Deactivated successfully. Jul 12 00:21:03.036827 systemd[1]: Started cri-containerd-d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb.scope - libcontainer container d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb. Jul 12 00:21:03.061315 containerd[1428]: time="2025-07-12T00:21:03.061276375Z" level=info msg="StartContainer for \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\" returns successfully" Jul 12 00:21:03.095936 systemd[1]: cri-containerd-d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb.scope: Deactivated successfully. 
Jul 12 00:21:03.220460 kubelet[2443]: E0712 00:21:03.220421 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:03.227362 containerd[1428]: time="2025-07-12T00:21:03.223683001Z" level=info msg="shim disconnected" id=d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb namespace=k8s.io Jul 12 00:21:03.227362 containerd[1428]: time="2025-07-12T00:21:03.227217102Z" level=warning msg="cleaning up after shim disconnected" id=d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb namespace=k8s.io Jul 12 00:21:03.227362 containerd[1428]: time="2025-07-12T00:21:03.227231658Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:21:04.003369 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb-rootfs.mount: Deactivated successfully. Jul 12 00:21:04.142508 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2110719824.mount: Deactivated successfully. Jul 12 00:21:04.220946 kubelet[2443]: E0712 00:21:04.220806 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:04.224748 containerd[1428]: time="2025-07-12T00:21:04.224563695Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 00:21:04.252327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3473080128.mount: Deactivated successfully. 
Jul 12 00:21:04.258433 containerd[1428]: time="2025-07-12T00:21:04.257948393Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\"" Jul 12 00:21:04.260213 containerd[1428]: time="2025-07-12T00:21:04.259294733Z" level=info msg="StartContainer for \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\"" Jul 12 00:21:04.298820 systemd[1]: Started cri-containerd-1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47.scope - libcontainer container 1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47. Jul 12 00:21:04.323509 containerd[1428]: time="2025-07-12T00:21:04.323461982Z" level=info msg="StartContainer for \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\" returns successfully" Jul 12 00:21:04.356015 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 00:21:04.356251 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 00:21:04.356330 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:21:04.367723 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 00:21:04.380093 systemd[1]: cri-containerd-1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47.scope: Deactivated successfully. Jul 12 00:21:04.383897 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 12 00:21:04.453867 containerd[1428]: time="2025-07-12T00:21:04.453798336Z" level=info msg="shim disconnected" id=1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47 namespace=k8s.io Jul 12 00:21:04.453867 containerd[1428]: time="2025-07-12T00:21:04.453860916Z" level=warning msg="cleaning up after shim disconnected" id=1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47 namespace=k8s.io Jul 12 00:21:04.453867 containerd[1428]: time="2025-07-12T00:21:04.453869394Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:21:04.492859 containerd[1428]: time="2025-07-12T00:21:04.492806599Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:21:04.494346 containerd[1428]: time="2025-07-12T00:21:04.494299773Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 12 00:21:04.495166 containerd[1428]: time="2025-07-12T00:21:04.495120317Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 00:21:04.496822 containerd[1428]: time="2025-07-12T00:21:04.496729054Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.518752162s" Jul 12 00:21:04.496822 containerd[1428]: time="2025-07-12T00:21:04.496769762Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 00:21:04.498557 containerd[1428]: time="2025-07-12T00:21:04.498455555Z" level=info msg="CreateContainer within sandbox \"d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 00:21:04.516249 containerd[1428]: time="2025-07-12T00:21:04.516112204Z" level=info msg="CreateContainer within sandbox \"d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\"" Jul 12 00:21:04.516715 containerd[1428]: time="2025-07-12T00:21:04.516691583Z" level=info msg="StartContainer for \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\"" Jul 12 00:21:04.542903 systemd[1]: Started cri-containerd-c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da.scope - libcontainer container c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da. 
Jul 12 00:21:04.573269 containerd[1428]: time="2025-07-12T00:21:04.573219137Z" level=info msg="StartContainer for \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\" returns successfully" Jul 12 00:21:05.223301 kubelet[2443]: E0712 00:21:05.223066 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:05.225725 kubelet[2443]: E0712 00:21:05.225698 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:05.228346 containerd[1428]: time="2025-07-12T00:21:05.227987870Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 00:21:05.240562 kubelet[2443]: I0712 00:21:05.240050 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-mxx7p" podStartSLOduration=1.894347368 podStartE2EDuration="13.240031548s" podCreationTimestamp="2025-07-12 00:20:52 +0000 UTC" firstStartedPulling="2025-07-12 00:20:53.151653125 +0000 UTC m=+8.088567064" lastFinishedPulling="2025-07-12 00:21:04.497337305 +0000 UTC m=+19.434251244" observedRunningTime="2025-07-12 00:21:05.239993279 +0000 UTC m=+20.176907218" watchObservedRunningTime="2025-07-12 00:21:05.240031548 +0000 UTC m=+20.176945447" Jul 12 00:21:05.264085 containerd[1428]: time="2025-07-12T00:21:05.264028171Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\"" Jul 12 00:21:05.264883 containerd[1428]: time="2025-07-12T00:21:05.264846603Z" level=info 
msg="StartContainer for \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\"" Jul 12 00:21:05.296892 systemd[1]: Started cri-containerd-9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292.scope - libcontainer container 9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292. Jul 12 00:21:05.329026 containerd[1428]: time="2025-07-12T00:21:05.328883918Z" level=info msg="StartContainer for \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\" returns successfully" Jul 12 00:21:05.345825 systemd[1]: cri-containerd-9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292.scope: Deactivated successfully. Jul 12 00:21:05.372088 containerd[1428]: time="2025-07-12T00:21:05.370174792Z" level=info msg="shim disconnected" id=9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292 namespace=k8s.io Jul 12 00:21:05.372088 containerd[1428]: time="2025-07-12T00:21:05.370227376Z" level=warning msg="cleaning up after shim disconnected" id=9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292 namespace=k8s.io Jul 12 00:21:05.372088 containerd[1428]: time="2025-07-12T00:21:05.370241091Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:21:06.004569 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292-rootfs.mount: Deactivated successfully. 
Jul 12 00:21:06.229782 kubelet[2443]: E0712 00:21:06.229098 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:06.229782 kubelet[2443]: E0712 00:21:06.229719 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:06.233600 containerd[1428]: time="2025-07-12T00:21:06.233400296Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 00:21:06.342585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2670539487.mount: Deactivated successfully. Jul 12 00:21:06.343405 containerd[1428]: time="2025-07-12T00:21:06.343354924Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\"" Jul 12 00:21:06.344711 containerd[1428]: time="2025-07-12T00:21:06.344653624Z" level=info msg="StartContainer for \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\"" Jul 12 00:21:06.374855 systemd[1]: Started cri-containerd-c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93.scope - libcontainer container c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93. Jul 12 00:21:06.394508 systemd[1]: cri-containerd-c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93.scope: Deactivated successfully. 
Jul 12 00:21:06.395770 containerd[1428]: time="2025-07-12T00:21:06.395713826Z" level=info msg="StartContainer for \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\" returns successfully" Jul 12 00:21:06.416979 containerd[1428]: time="2025-07-12T00:21:06.416910056Z" level=info msg="shim disconnected" id=c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93 namespace=k8s.io Jul 12 00:21:06.416979 containerd[1428]: time="2025-07-12T00:21:06.416972958Z" level=warning msg="cleaning up after shim disconnected" id=c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93 namespace=k8s.io Jul 12 00:21:06.416979 containerd[1428]: time="2025-07-12T00:21:06.416981235Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 00:21:07.004632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93-rootfs.mount: Deactivated successfully. Jul 12 00:21:07.232870 kubelet[2443]: E0712 00:21:07.232816 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:07.235115 containerd[1428]: time="2025-07-12T00:21:07.235058719Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 00:21:07.249473 containerd[1428]: time="2025-07-12T00:21:07.249424722Z" level=info msg="CreateContainer within sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\"" Jul 12 00:21:07.251058 containerd[1428]: time="2025-07-12T00:21:07.250866553Z" level=info msg="StartContainer for \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\"" Jul 12 00:21:07.279849 
systemd[1]: Started cri-containerd-4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7.scope - libcontainer container 4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7. Jul 12 00:21:07.311794 containerd[1428]: time="2025-07-12T00:21:07.311717403Z" level=info msg="StartContainer for \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\" returns successfully" Jul 12 00:21:07.481335 kubelet[2443]: I0712 00:21:07.481292 2443 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 00:21:07.520433 systemd[1]: Created slice kubepods-burstable-pod93efc284_6d89_406a_8a6c_c3b81c8dbb04.slice - libcontainer container kubepods-burstable-pod93efc284_6d89_406a_8a6c_c3b81c8dbb04.slice. Jul 12 00:21:07.525642 systemd[1]: Created slice kubepods-burstable-pod30a1fe69_9143_4a45_8d74_16d48cdc0f78.slice - libcontainer container kubepods-burstable-pod30a1fe69_9143_4a45_8d74_16d48cdc0f78.slice. Jul 12 00:21:07.595701 kubelet[2443]: I0712 00:21:07.595565 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctm9\" (UniqueName: \"kubernetes.io/projected/93efc284-6d89-406a-8a6c-c3b81c8dbb04-kube-api-access-mctm9\") pod \"coredns-668d6bf9bc-48kpg\" (UID: \"93efc284-6d89-406a-8a6c-c3b81c8dbb04\") " pod="kube-system/coredns-668d6bf9bc-48kpg" Jul 12 00:21:07.595701 kubelet[2443]: I0712 00:21:07.595614 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hfct\" (UniqueName: \"kubernetes.io/projected/30a1fe69-9143-4a45-8d74-16d48cdc0f78-kube-api-access-7hfct\") pod \"coredns-668d6bf9bc-zpxzg\" (UID: \"30a1fe69-9143-4a45-8d74-16d48cdc0f78\") " pod="kube-system/coredns-668d6bf9bc-zpxzg" Jul 12 00:21:07.595701 kubelet[2443]: I0712 00:21:07.595637 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/93efc284-6d89-406a-8a6c-c3b81c8dbb04-config-volume\") pod \"coredns-668d6bf9bc-48kpg\" (UID: \"93efc284-6d89-406a-8a6c-c3b81c8dbb04\") " pod="kube-system/coredns-668d6bf9bc-48kpg" Jul 12 00:21:07.595701 kubelet[2443]: I0712 00:21:07.595657 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/30a1fe69-9143-4a45-8d74-16d48cdc0f78-config-volume\") pod \"coredns-668d6bf9bc-zpxzg\" (UID: \"30a1fe69-9143-4a45-8d74-16d48cdc0f78\") " pod="kube-system/coredns-668d6bf9bc-zpxzg" Jul 12 00:21:07.823532 kubelet[2443]: E0712 00:21:07.823489 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:07.824487 containerd[1428]: time="2025-07-12T00:21:07.824433337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-48kpg,Uid:93efc284-6d89-406a-8a6c-c3b81c8dbb04,Namespace:kube-system,Attempt:0,}" Jul 12 00:21:07.828927 kubelet[2443]: E0712 00:21:07.828658 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:07.829552 containerd[1428]: time="2025-07-12T00:21:07.829512415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zpxzg,Uid:30a1fe69-9143-4a45-8d74-16d48cdc0f78,Namespace:kube-system,Attempt:0,}" Jul 12 00:21:08.238573 kubelet[2443]: E0712 00:21:08.238545 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:08.260062 kubelet[2443]: I0712 00:21:08.257385 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-q6b64" podStartSLOduration=6.227036228 
podStartE2EDuration="16.2573675s" podCreationTimestamp="2025-07-12 00:20:52 +0000 UTC" firstStartedPulling="2025-07-12 00:20:52.947463561 +0000 UTC m=+7.884377500" lastFinishedPulling="2025-07-12 00:21:02.977794873 +0000 UTC m=+17.914708772" observedRunningTime="2025-07-12 00:21:08.257005879 +0000 UTC m=+23.193919818" watchObservedRunningTime="2025-07-12 00:21:08.2573675 +0000 UTC m=+23.194281399" Jul 12 00:21:09.246996 kubelet[2443]: E0712 00:21:09.246775 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:09.702654 systemd-networkd[1347]: cilium_host: Link UP Jul 12 00:21:09.702821 systemd-networkd[1347]: cilium_net: Link UP Jul 12 00:21:09.702962 systemd-networkd[1347]: cilium_net: Gained carrier Jul 12 00:21:09.703091 systemd-networkd[1347]: cilium_host: Gained carrier Jul 12 00:21:09.799734 systemd-networkd[1347]: cilium_vxlan: Link UP Jul 12 00:21:09.799747 systemd-networkd[1347]: cilium_vxlan: Gained carrier Jul 12 00:21:10.110724 kernel: NET: Registered PF_ALG protocol family Jul 12 00:21:10.184832 systemd-networkd[1347]: cilium_net: Gained IPv6LL Jul 12 00:21:10.247906 kubelet[2443]: E0712 00:21:10.247858 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:10.752811 systemd-networkd[1347]: cilium_host: Gained IPv6LL Jul 12 00:21:10.764768 systemd-networkd[1347]: lxc_health: Link UP Jul 12 00:21:10.772970 systemd-networkd[1347]: lxc_health: Gained carrier Jul 12 00:21:11.000471 systemd-networkd[1347]: lxc5c6100e8b8af: Link UP Jul 12 00:21:11.006017 systemd-networkd[1347]: lxc119e7f92c5f4: Link UP Jul 12 00:21:11.015747 kernel: eth0: renamed from tmpeb723 Jul 12 00:21:11.024064 kernel: eth0: renamed from tmpdfc09 Jul 12 00:21:11.028427 systemd-networkd[1347]: lxc5c6100e8b8af: Gained 
carrier Jul 12 00:21:11.029284 systemd-networkd[1347]: lxc119e7f92c5f4: Gained carrier Jul 12 00:21:11.250162 kubelet[2443]: E0712 00:21:11.250128 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:11.840901 systemd-networkd[1347]: cilium_vxlan: Gained IPv6LL Jul 12 00:21:12.251815 kubelet[2443]: E0712 00:21:12.251785 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 12 00:21:12.288893 systemd-networkd[1347]: lxc119e7f92c5f4: Gained IPv6LL Jul 12 00:21:12.417929 systemd-networkd[1347]: lxc_health: Gained IPv6LL Jul 12 00:21:13.058734 systemd-networkd[1347]: lxc5c6100e8b8af: Gained IPv6LL Jul 12 00:21:13.857386 systemd[1]: Started sshd@7-10.0.0.97:22-10.0.0.1:60130.service - OpenSSH per-connection server daemon (10.0.0.1:60130). Jul 12 00:21:13.900269 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 60130 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:21:13.901336 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:21:13.910734 systemd-logind[1410]: New session 8 of user core. Jul 12 00:21:13.917860 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 00:21:14.060692 sshd[3677]: pam_unix(sshd:session): session closed for user core Jul 12 00:21:14.064847 systemd[1]: sshd@7-10.0.0.97:22-10.0.0.1:60130.service: Deactivated successfully. Jul 12 00:21:14.066980 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 00:21:14.067656 systemd-logind[1410]: Session 8 logged out. Waiting for processes to exit. Jul 12 00:21:14.068891 systemd-logind[1410]: Removed session 8. 
Jul 12 00:21:14.664626 containerd[1428]: time="2025-07-12T00:21:14.664460018Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:21:14.664626 containerd[1428]: time="2025-07-12T00:21:14.664511646Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:21:14.664626 containerd[1428]: time="2025-07-12T00:21:14.664536321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:14.665039 containerd[1428]: time="2025-07-12T00:21:14.664625980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:14.676996 containerd[1428]: time="2025-07-12T00:21:14.676878796Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 12 00:21:14.676996 containerd[1428]: time="2025-07-12T00:21:14.676937183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 12 00:21:14.676996 containerd[1428]: time="2025-07-12T00:21:14.676951619Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:14.677222 containerd[1428]: time="2025-07-12T00:21:14.677078591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 12 00:21:14.687845 systemd[1]: Started cri-containerd-eb72326ca98a3148a6fb486aa1530fb76872ca6627e27d3162d81fdde4827654.scope - libcontainer container eb72326ca98a3148a6fb486aa1530fb76872ca6627e27d3162d81fdde4827654. 
Jul 12 00:21:14.691726 systemd[1]: Started cri-containerd-dfc09e0897cdfd410755232e275bb3776ff5c49f5ac6d71f2a4fe209d9954b6f.scope - libcontainer container dfc09e0897cdfd410755232e275bb3776ff5c49f5ac6d71f2a4fe209d9954b6f.
Jul 12 00:21:14.699475 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:21:14.703799 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 12 00:21:14.722020 containerd[1428]: time="2025-07-12T00:21:14.721632467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-48kpg,Uid:93efc284-6d89-406a-8a6c-c3b81c8dbb04,Namespace:kube-system,Attempt:0,} returns sandbox id \"eb72326ca98a3148a6fb486aa1530fb76872ca6627e27d3162d81fdde4827654\""
Jul 12 00:21:14.722830 kubelet[2443]: E0712 00:21:14.722807    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:14.725153 containerd[1428]: time="2025-07-12T00:21:14.725099079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-zpxzg,Uid:30a1fe69-9143-4a45-8d74-16d48cdc0f78,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfc09e0897cdfd410755232e275bb3776ff5c49f5ac6d71f2a4fe209d9954b6f\""
Jul 12 00:21:14.726530 kubelet[2443]: E0712 00:21:14.726374    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:14.727960 containerd[1428]: time="2025-07-12T00:21:14.727857332Z" level=info msg="CreateContainer within sandbox \"eb72326ca98a3148a6fb486aa1530fb76872ca6627e27d3162d81fdde4827654\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:21:14.729152 containerd[1428]: time="2025-07-12T00:21:14.729120165Z" level=info msg="CreateContainer within sandbox \"dfc09e0897cdfd410755232e275bb3776ff5c49f5ac6d71f2a4fe209d9954b6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 12 00:21:14.742910 containerd[1428]: time="2025-07-12T00:21:14.742854084Z" level=info msg="CreateContainer within sandbox \"dfc09e0897cdfd410755232e275bb3776ff5c49f5ac6d71f2a4fe209d9954b6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e8e102025170bc9530fcd058295be22efdbb8b6d006b1a51d49367bc3d227677\""
Jul 12 00:21:14.744114 containerd[1428]: time="2025-07-12T00:21:14.743485461Z" level=info msg="StartContainer for \"e8e102025170bc9530fcd058295be22efdbb8b6d006b1a51d49367bc3d227677\""
Jul 12 00:21:14.755339 containerd[1428]: time="2025-07-12T00:21:14.755279861Z" level=info msg="CreateContainer within sandbox \"eb72326ca98a3148a6fb486aa1530fb76872ca6627e27d3162d81fdde4827654\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17bddc897f1dfeab6e56b4cf5bdbb791b5e8da97cbf200c3cec257baaa03c8bb\""
Jul 12 00:21:14.756064 containerd[1428]: time="2025-07-12T00:21:14.755885883Z" level=info msg="StartContainer for \"17bddc897f1dfeab6e56b4cf5bdbb791b5e8da97cbf200c3cec257baaa03c8bb\""
Jul 12 00:21:14.771868 systemd[1]: Started cri-containerd-e8e102025170bc9530fcd058295be22efdbb8b6d006b1a51d49367bc3d227677.scope - libcontainer container e8e102025170bc9530fcd058295be22efdbb8b6d006b1a51d49367bc3d227677.
Jul 12 00:21:14.776890 systemd[1]: Started cri-containerd-17bddc897f1dfeab6e56b4cf5bdbb791b5e8da97cbf200c3cec257baaa03c8bb.scope - libcontainer container 17bddc897f1dfeab6e56b4cf5bdbb791b5e8da97cbf200c3cec257baaa03c8bb.
Jul 12 00:21:14.806544 containerd[1428]: time="2025-07-12T00:21:14.806500542Z" level=info msg="StartContainer for \"e8e102025170bc9530fcd058295be22efdbb8b6d006b1a51d49367bc3d227677\" returns successfully"
Jul 12 00:21:14.806905 containerd[1428]: time="2025-07-12T00:21:14.806553090Z" level=info msg="StartContainer for \"17bddc897f1dfeab6e56b4cf5bdbb791b5e8da97cbf200c3cec257baaa03c8bb\" returns successfully"
Jul 12 00:21:15.259287 kubelet[2443]: E0712 00:21:15.259171    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:15.267304 kubelet[2443]: E0712 00:21:15.267038    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:15.276315 kubelet[2443]: I0712 00:21:15.276241    2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-zpxzg" podStartSLOduration=23.276217766 podStartE2EDuration="23.276217766s" podCreationTimestamp="2025-07-12 00:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:21:15.272624317 +0000 UTC m=+30.209538256" watchObservedRunningTime="2025-07-12 00:21:15.276217766 +0000 UTC m=+30.213131705"
Jul 12 00:21:15.295811 kubelet[2443]: I0712 00:21:15.295739    2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-48kpg" podStartSLOduration=23.295720833 podStartE2EDuration="23.295720833s" podCreationTimestamp="2025-07-12 00:20:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:21:15.285282091 +0000 UTC m=+30.222196030" watchObservedRunningTime="2025-07-12 00:21:15.295720833 +0000 UTC m=+30.232634772"
Jul 12 00:21:16.269223 kubelet[2443]: E0712 00:21:16.269143    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:16.270273 kubelet[2443]: E0712 00:21:16.269230    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:17.271278 kubelet[2443]: E0712 00:21:17.271108    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:17.271278 kubelet[2443]: E0712 00:21:17.271199    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:19.071283 systemd[1]: Started sshd@8-10.0.0.97:22-10.0.0.1:60136.service - OpenSSH per-connection server daemon (10.0.0.1:60136).
Jul 12 00:21:19.107549 sshd[3873]: Accepted publickey for core from 10.0.0.1 port 60136 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:19.109023 sshd[3873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:19.113409 systemd-logind[1410]: New session 9 of user core.
Jul 12 00:21:19.122833 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 12 00:21:19.238510 sshd[3873]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:19.242123 systemd[1]: sshd@8-10.0.0.97:22-10.0.0.1:60136.service: Deactivated successfully.
Jul 12 00:21:19.243888 systemd[1]: session-9.scope: Deactivated successfully.
Jul 12 00:21:19.244420 systemd-logind[1410]: Session 9 logged out. Waiting for processes to exit.
Jul 12 00:21:19.245181 systemd-logind[1410]: Removed session 9.
Jul 12 00:21:24.248392 systemd[1]: Started sshd@9-10.0.0.97:22-10.0.0.1:53238.service - OpenSSH per-connection server daemon (10.0.0.1:53238).
Jul 12 00:21:24.286209 sshd[3890]: Accepted publickey for core from 10.0.0.1 port 53238 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:24.287909 sshd[3890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:24.295134 systemd-logind[1410]: New session 10 of user core.
Jul 12 00:21:24.304827 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 12 00:21:24.422114 sshd[3890]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:24.425395 systemd[1]: sshd@9-10.0.0.97:22-10.0.0.1:53238.service: Deactivated successfully.
Jul 12 00:21:24.427020 systemd[1]: session-10.scope: Deactivated successfully.
Jul 12 00:21:24.428211 systemd-logind[1410]: Session 10 logged out. Waiting for processes to exit.
Jul 12 00:21:24.429120 systemd-logind[1410]: Removed session 10.
Jul 12 00:21:29.435558 systemd[1]: Started sshd@10-10.0.0.97:22-10.0.0.1:53246.service - OpenSSH per-connection server daemon (10.0.0.1:53246).
Jul 12 00:21:29.473195 sshd[3905]: Accepted publickey for core from 10.0.0.1 port 53246 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:29.474630 sshd[3905]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:29.478756 systemd-logind[1410]: New session 11 of user core.
Jul 12 00:21:29.488886 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 12 00:21:29.607348 sshd[3905]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:29.621998 systemd[1]: sshd@10-10.0.0.97:22-10.0.0.1:53246.service: Deactivated successfully.
Jul 12 00:21:29.623943 systemd[1]: session-11.scope: Deactivated successfully.
Jul 12 00:21:29.625485 systemd-logind[1410]: Session 11 logged out. Waiting for processes to exit.
Jul 12 00:21:29.627544 systemd[1]: Started sshd@11-10.0.0.97:22-10.0.0.1:53258.service - OpenSSH per-connection server daemon (10.0.0.1:53258).
Jul 12 00:21:29.628402 systemd-logind[1410]: Removed session 11.
Jul 12 00:21:29.662837 sshd[3920]: Accepted publickey for core from 10.0.0.1 port 53258 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:29.664389 sshd[3920]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:29.669759 systemd-logind[1410]: New session 12 of user core.
Jul 12 00:21:29.682510 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 12 00:21:29.856399 sshd[3920]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:29.873142 systemd[1]: sshd@11-10.0.0.97:22-10.0.0.1:53258.service: Deactivated successfully.
Jul 12 00:21:29.877628 systemd[1]: session-12.scope: Deactivated successfully.
Jul 12 00:21:29.880460 systemd-logind[1410]: Session 12 logged out. Waiting for processes to exit.
Jul 12 00:21:29.889633 systemd[1]: Started sshd@12-10.0.0.97:22-10.0.0.1:53264.service - OpenSSH per-connection server daemon (10.0.0.1:53264).
Jul 12 00:21:29.892229 systemd-logind[1410]: Removed session 12.
Jul 12 00:21:29.925806 sshd[3933]: Accepted publickey for core from 10.0.0.1 port 53264 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:29.927257 sshd[3933]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:29.932105 systemd-logind[1410]: New session 13 of user core.
Jul 12 00:21:29.943345 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 12 00:21:30.064883 sshd[3933]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:30.068497 systemd[1]: sshd@12-10.0.0.97:22-10.0.0.1:53264.service: Deactivated successfully.
Jul 12 00:21:30.070346 systemd[1]: session-13.scope: Deactivated successfully.
Jul 12 00:21:30.070945 systemd-logind[1410]: Session 13 logged out. Waiting for processes to exit.
Jul 12 00:21:30.072017 systemd-logind[1410]: Removed session 13.
Jul 12 00:21:35.085986 systemd[1]: Started sshd@13-10.0.0.97:22-10.0.0.1:40456.service - OpenSSH per-connection server daemon (10.0.0.1:40456).
Jul 12 00:21:35.116510 sshd[3948]: Accepted publickey for core from 10.0.0.1 port 40456 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:35.117966 sshd[3948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:35.124093 systemd-logind[1410]: New session 14 of user core.
Jul 12 00:21:35.129909 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 12 00:21:35.251974 sshd[3948]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:35.255292 systemd[1]: sshd@13-10.0.0.97:22-10.0.0.1:40456.service: Deactivated successfully.
Jul 12 00:21:35.257280 systemd[1]: session-14.scope: Deactivated successfully.
Jul 12 00:21:35.258774 systemd-logind[1410]: Session 14 logged out. Waiting for processes to exit.
Jul 12 00:21:35.261318 systemd-logind[1410]: Removed session 14.
Jul 12 00:21:40.262117 systemd[1]: Started sshd@14-10.0.0.97:22-10.0.0.1:40466.service - OpenSSH per-connection server daemon (10.0.0.1:40466).
Jul 12 00:21:40.308268 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 40466 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:40.309829 sshd[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:40.314295 systemd-logind[1410]: New session 15 of user core.
Jul 12 00:21:40.325884 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 12 00:21:40.445397 sshd[3963]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:40.457705 systemd[1]: sshd@14-10.0.0.97:22-10.0.0.1:40466.service: Deactivated successfully.
Jul 12 00:21:40.459547 systemd[1]: session-15.scope: Deactivated successfully.
Jul 12 00:21:40.465775 systemd-logind[1410]: Session 15 logged out. Waiting for processes to exit.
Jul 12 00:21:40.473143 systemd[1]: Started sshd@15-10.0.0.97:22-10.0.0.1:40468.service - OpenSSH per-connection server daemon (10.0.0.1:40468).
Jul 12 00:21:40.476917 systemd-logind[1410]: Removed session 15.
Jul 12 00:21:40.504753 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 40468 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:40.506268 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:40.510095 systemd-logind[1410]: New session 16 of user core.
Jul 12 00:21:40.517885 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 12 00:21:40.770644 sshd[3978]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:40.778472 systemd[1]: sshd@15-10.0.0.97:22-10.0.0.1:40468.service: Deactivated successfully.
Jul 12 00:21:40.780874 systemd[1]: session-16.scope: Deactivated successfully.
Jul 12 00:21:40.783595 systemd-logind[1410]: Session 16 logged out. Waiting for processes to exit.
Jul 12 00:21:40.789055 systemd[1]: Started sshd@16-10.0.0.97:22-10.0.0.1:40484.service - OpenSSH per-connection server daemon (10.0.0.1:40484).
Jul 12 00:21:40.790948 systemd-logind[1410]: Removed session 16.
Jul 12 00:21:40.836166 sshd[3990]: Accepted publickey for core from 10.0.0.1 port 40484 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:40.837971 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:40.844130 systemd-logind[1410]: New session 17 of user core.
Jul 12 00:21:40.859104 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 12 00:21:41.464421 sshd[3990]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:41.478303 systemd[1]: sshd@16-10.0.0.97:22-10.0.0.1:40484.service: Deactivated successfully.
Jul 12 00:21:41.482082 systemd[1]: session-17.scope: Deactivated successfully.
Jul 12 00:21:41.489910 systemd-logind[1410]: Session 17 logged out. Waiting for processes to exit.
Jul 12 00:21:41.495155 systemd[1]: Started sshd@17-10.0.0.97:22-10.0.0.1:40494.service - OpenSSH per-connection server daemon (10.0.0.1:40494).
Jul 12 00:21:41.496872 systemd-logind[1410]: Removed session 17.
Jul 12 00:21:41.529075 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 40494 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:41.530548 sshd[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:41.534313 systemd-logind[1410]: New session 18 of user core.
Jul 12 00:21:41.540879 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 12 00:21:41.800769 sshd[4009]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:41.811043 systemd[1]: sshd@17-10.0.0.97:22-10.0.0.1:40494.service: Deactivated successfully.
Jul 12 00:21:41.814131 systemd[1]: session-18.scope: Deactivated successfully.
Jul 12 00:21:41.816651 systemd-logind[1410]: Session 18 logged out. Waiting for processes to exit.
Jul 12 00:21:41.823740 systemd[1]: Started sshd@18-10.0.0.97:22-10.0.0.1:40502.service - OpenSSH per-connection server daemon (10.0.0.1:40502).
Jul 12 00:21:41.826352 systemd-logind[1410]: Removed session 18.
Jul 12 00:21:41.857279 sshd[4022]: Accepted publickey for core from 10.0.0.1 port 40502 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:41.858869 sshd[4022]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:41.863747 systemd-logind[1410]: New session 19 of user core.
Jul 12 00:21:41.877876 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 12 00:21:42.003656 sshd[4022]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:42.009170 systemd[1]: sshd@18-10.0.0.97:22-10.0.0.1:40502.service: Deactivated successfully.
Jul 12 00:21:42.011195 systemd[1]: session-19.scope: Deactivated successfully.
Jul 12 00:21:42.014865 systemd-logind[1410]: Session 19 logged out. Waiting for processes to exit.
Jul 12 00:21:42.016325 systemd-logind[1410]: Removed session 19.
Jul 12 00:21:47.013382 systemd[1]: Started sshd@19-10.0.0.97:22-10.0.0.1:46094.service - OpenSSH per-connection server daemon (10.0.0.1:46094).
Jul 12 00:21:47.046216 sshd[4040]: Accepted publickey for core from 10.0.0.1 port 46094 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:47.047389 sshd[4040]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:47.050911 systemd-logind[1410]: New session 20 of user core.
Jul 12 00:21:47.062879 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 12 00:21:47.170063 sshd[4040]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:47.173966 systemd[1]: sshd@19-10.0.0.97:22-10.0.0.1:46094.service: Deactivated successfully.
Jul 12 00:21:47.175581 systemd[1]: session-20.scope: Deactivated successfully.
Jul 12 00:21:47.178707 systemd-logind[1410]: Session 20 logged out. Waiting for processes to exit.
Jul 12 00:21:47.179442 systemd-logind[1410]: Removed session 20.
Jul 12 00:21:52.183034 systemd[1]: Started sshd@20-10.0.0.97:22-10.0.0.1:46100.service - OpenSSH per-connection server daemon (10.0.0.1:46100).
Jul 12 00:21:52.218433 sshd[4055]: Accepted publickey for core from 10.0.0.1 port 46100 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:52.219634 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:52.223118 systemd-logind[1410]: New session 21 of user core.
Jul 12 00:21:52.238854 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 12 00:21:52.345753 sshd[4055]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:52.348790 systemd[1]: sshd@20-10.0.0.97:22-10.0.0.1:46100.service: Deactivated successfully.
Jul 12 00:21:52.350802 systemd[1]: session-21.scope: Deactivated successfully.
Jul 12 00:21:52.352267 systemd-logind[1410]: Session 21 logged out. Waiting for processes to exit.
Jul 12 00:21:52.353531 systemd-logind[1410]: Removed session 21.
Jul 12 00:21:56.175501 kubelet[2443]: E0712 00:21:56.175450    2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:21:57.357926 systemd[1]: Started sshd@21-10.0.0.97:22-10.0.0.1:38308.service - OpenSSH per-connection server daemon (10.0.0.1:38308).
Jul 12 00:21:57.393478 sshd[4071]: Accepted publickey for core from 10.0.0.1 port 38308 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:57.394782 sshd[4071]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:57.398223 systemd-logind[1410]: New session 22 of user core.
Jul 12 00:21:57.406926 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 12 00:21:57.523366 sshd[4071]: pam_unix(sshd:session): session closed for user core
Jul 12 00:21:57.538948 systemd[1]: sshd@21-10.0.0.97:22-10.0.0.1:38308.service: Deactivated successfully.
Jul 12 00:21:57.541097 systemd[1]: session-22.scope: Deactivated successfully.
Jul 12 00:21:57.542376 systemd-logind[1410]: Session 22 logged out. Waiting for processes to exit.
Jul 12 00:21:57.547939 systemd[1]: Started sshd@22-10.0.0.97:22-10.0.0.1:38314.service - OpenSSH per-connection server daemon (10.0.0.1:38314).
Jul 12 00:21:57.548779 systemd-logind[1410]: Removed session 22.
Jul 12 00:21:57.579073 sshd[4085]: Accepted publickey for core from 10.0.0.1 port 38314 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:21:57.580456 sshd[4085]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:21:57.584496 systemd-logind[1410]: New session 23 of user core.
Jul 12 00:21:57.588816 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 12 00:22:00.115532 containerd[1428]: time="2025-07-12T00:22:00.115471715Z" level=info msg="StopContainer for \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\" with timeout 30 (s)"
Jul 12 00:22:00.116025 containerd[1428]: time="2025-07-12T00:22:00.115995327Z" level=info msg="Stop container \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\" with signal terminated"
Jul 12 00:22:00.128613 systemd[1]: cri-containerd-c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da.scope: Deactivated successfully.
Jul 12 00:22:00.150097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da-rootfs.mount: Deactivated successfully.
Jul 12 00:22:00.152372 systemd[1]: run-containerd-runc-k8s.io-4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7-runc.SSB3yB.mount: Deactivated successfully.
Jul 12 00:22:00.155871 containerd[1428]: time="2025-07-12T00:22:00.155811067Z" level=info msg="shim disconnected" id=c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da namespace=k8s.io
Jul 12 00:22:00.155871 containerd[1428]: time="2025-07-12T00:22:00.155863145Z" level=warning msg="cleaning up after shim disconnected" id=c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da namespace=k8s.io
Jul 12 00:22:00.155871 containerd[1428]: time="2025-07-12T00:22:00.155876824Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:00.175951 containerd[1428]: time="2025-07-12T00:22:00.175829572Z" level=info msg="StopContainer for \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\" with timeout 2 (s)"
Jul 12 00:22:00.176991 containerd[1428]: time="2025-07-12T00:22:00.176208112Z" level=info msg="Stop container \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\" with signal terminated"
Jul 12 00:22:00.181943 systemd-networkd[1347]: lxc_health: Link DOWN
Jul 12 00:22:00.182333 systemd-networkd[1347]: lxc_health: Lost carrier
Jul 12 00:22:00.204911 containerd[1428]: time="2025-07-12T00:22:00.204799004Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 12 00:22:00.215477 systemd[1]: cri-containerd-4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7.scope: Deactivated successfully.
Jul 12 00:22:00.215810 systemd[1]: cri-containerd-4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7.scope: Consumed 6.936s CPU time.
Jul 12 00:22:00.222115 containerd[1428]: time="2025-07-12T00:22:00.218753148Z" level=info msg="StopContainer for \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\" returns successfully"
Jul 12 00:22:00.222115 containerd[1428]: time="2025-07-12T00:22:00.219340277Z" level=info msg="StopPodSandbox for \"d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889\""
Jul 12 00:22:00.222115 containerd[1428]: time="2025-07-12T00:22:00.219376635Z" level=info msg="Container to stop \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:22:00.221259 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889-shm.mount: Deactivated successfully.
Jul 12 00:22:00.222456 kubelet[2443]: E0712 00:22:00.221072    2443 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:22:00.234510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7-rootfs.mount: Deactivated successfully.
Jul 12 00:22:00.236004 systemd[1]: cri-containerd-d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889.scope: Deactivated successfully.
Jul 12 00:22:00.242314 containerd[1428]: time="2025-07-12T00:22:00.242228590Z" level=info msg="shim disconnected" id=4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7 namespace=k8s.io
Jul 12 00:22:00.242314 containerd[1428]: time="2025-07-12T00:22:00.242300186Z" level=warning msg="cleaning up after shim disconnected" id=4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7 namespace=k8s.io
Jul 12 00:22:00.242314 containerd[1428]: time="2025-07-12T00:22:00.242312705Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:00.257284 containerd[1428]: time="2025-07-12T00:22:00.257233078Z" level=info msg="StopContainer for \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\" returns successfully"
Jul 12 00:22:00.258044 containerd[1428]: time="2025-07-12T00:22:00.257994078Z" level=info msg="StopPodSandbox for \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\""
Jul 12 00:22:00.258091 containerd[1428]: time="2025-07-12T00:22:00.258044476Z" level=info msg="Container to stop \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:22:00.258214 containerd[1428]: time="2025-07-12T00:22:00.258059715Z" level=info msg="Container to stop \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:22:00.258247 containerd[1428]: time="2025-07-12T00:22:00.258216187Z" level=info msg="Container to stop \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:22:00.258247 containerd[1428]: time="2025-07-12T00:22:00.258235226Z" level=info msg="Container to stop \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:22:00.258299 containerd[1428]: time="2025-07-12T00:22:00.258245945Z" level=info msg="Container to stop \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 12 00:22:00.264491 systemd[1]: cri-containerd-95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc.scope: Deactivated successfully.
Jul 12 00:22:00.268970 containerd[1428]: time="2025-07-12T00:22:00.268396210Z" level=info msg="shim disconnected" id=d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889 namespace=k8s.io
Jul 12 00:22:00.268970 containerd[1428]: time="2025-07-12T00:22:00.268970179Z" level=warning msg="cleaning up after shim disconnected" id=d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889 namespace=k8s.io
Jul 12 00:22:00.269172 containerd[1428]: time="2025-07-12T00:22:00.268983739Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:00.281863 containerd[1428]: time="2025-07-12T00:22:00.281808662Z" level=info msg="TearDown network for sandbox \"d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889\" successfully"
Jul 12 00:22:00.281863 containerd[1428]: time="2025-07-12T00:22:00.281845460Z" level=info msg="StopPodSandbox for \"d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889\" returns successfully"
Jul 12 00:22:00.298743 containerd[1428]: time="2025-07-12T00:22:00.298683772Z" level=info msg="shim disconnected" id=95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc namespace=k8s.io
Jul 12 00:22:00.300051 containerd[1428]: time="2025-07-12T00:22:00.299258382Z" level=warning msg="cleaning up after shim disconnected" id=95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc namespace=k8s.io
Jul 12 00:22:00.300051 containerd[1428]: time="2025-07-12T00:22:00.299282941Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:00.324913 kubelet[2443]: I0712 00:22:00.324863    2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mcn95\" (UniqueName: \"kubernetes.io/projected/08bbdb7a-df6a-4344-b96e-f30a27e23373-kube-api-access-mcn95\") pod \"08bbdb7a-df6a-4344-b96e-f30a27e23373\" (UID: \"08bbdb7a-df6a-4344-b96e-f30a27e23373\") "
Jul 12 00:22:00.324913 kubelet[2443]: I0712 00:22:00.324919    2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08bbdb7a-df6a-4344-b96e-f30a27e23373-cilium-config-path\") pod \"08bbdb7a-df6a-4344-b96e-f30a27e23373\" (UID: \"08bbdb7a-df6a-4344-b96e-f30a27e23373\") "
Jul 12 00:22:00.329526 kubelet[2443]: I0712 00:22:00.329472    2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08bbdb7a-df6a-4344-b96e-f30a27e23373-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "08bbdb7a-df6a-4344-b96e-f30a27e23373" (UID: "08bbdb7a-df6a-4344-b96e-f30a27e23373"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jul 12 00:22:00.342571 kubelet[2443]: I0712 00:22:00.342498    2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08bbdb7a-df6a-4344-b96e-f30a27e23373-kube-api-access-mcn95" (OuterVolumeSpecName: "kube-api-access-mcn95") pod "08bbdb7a-df6a-4344-b96e-f30a27e23373" (UID: "08bbdb7a-df6a-4344-b96e-f30a27e23373"). InnerVolumeSpecName "kube-api-access-mcn95". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jul 12 00:22:00.346774 containerd[1428]: time="2025-07-12T00:22:00.344150735Z" level=info msg="TearDown network for sandbox \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" successfully"
Jul 12 00:22:00.346774 containerd[1428]: time="2025-07-12T00:22:00.344184653Z" level=info msg="StopPodSandbox for \"95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc\" returns successfully"
Jul 12 00:22:00.362059 kubelet[2443]: I0712 00:22:00.362018    2443 scope.go:117] "RemoveContainer" containerID="c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da"
Jul 12 00:22:00.364160 containerd[1428]: time="2025-07-12T00:22:00.364101922Z" level=info msg="RemoveContainer for \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\""
Jul 12 00:22:00.368043 containerd[1428]: time="2025-07-12T00:22:00.367948879Z" level=info msg="RemoveContainer for \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\" returns successfully"
Jul 12 00:22:00.368317 kubelet[2443]: I0712 00:22:00.368286    2443 scope.go:117] "RemoveContainer" containerID="c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da"
Jul 12 00:22:00.368559 containerd[1428]: time="2025-07-12T00:22:00.368515930Z" level=error msg="ContainerStatus for \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\": not found"
Jul 12 00:22:00.371245 systemd[1]: Removed slice kubepods-besteffort-pod08bbdb7a_df6a_4344_b96e_f30a27e23373.slice - libcontainer container kubepods-besteffort-pod08bbdb7a_df6a_4344_b96e_f30a27e23373.slice.
Jul 12 00:22:00.377988 kubelet[2443]: E0712 00:22:00.377939    2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\": not found" containerID="c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da"
Jul 12 00:22:00.384724 kubelet[2443]: I0712 00:22:00.384399    2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da"} err="failed to get container status \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\": rpc error: code = NotFound desc = an error occurred when try to find container \"c22d20bcd81ab8901b2a910d8c436e03e21cb76017ac00642f76e448a5e484da\": not found"
Jul 12 00:22:00.384724 kubelet[2443]: I0712 00:22:00.384514    2443 scope.go:117] "RemoveContainer" containerID="4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7"
Jul 12 00:22:00.386226 containerd[1428]: time="2025-07-12T00:22:00.386178078Z" level=info msg="RemoveContainer for \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\""
Jul 12 00:22:00.388837 containerd[1428]: time="2025-07-12T00:22:00.388806219Z" level=info msg="RemoveContainer for \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\" returns successfully"
Jul 12 00:22:00.389107 kubelet[2443]: I0712 00:22:00.388997    2443 scope.go:117] "RemoveContainer" containerID="c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93"
Jul 12 00:22:00.390470 containerd[1428]: time="2025-07-12T00:22:00.390135429Z" level=info msg="RemoveContainer for \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\""
Jul 12 00:22:00.405204 containerd[1428]: time="2025-07-12T00:22:00.405158037Z" level=info msg="RemoveContainer for \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\" returns successfully"
Jul 12 00:22:00.405641 kubelet[2443]: I0712 00:22:00.405593 2443 scope.go:117] "RemoveContainer" containerID="9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292" Jul 12 00:22:00.406689 containerd[1428]: time="2025-07-12T00:22:00.406652318Z" level=info msg="RemoveContainer for \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\"" Jul 12 00:22:00.409110 containerd[1428]: time="2025-07-12T00:22:00.409076350Z" level=info msg="RemoveContainer for \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\" returns successfully" Jul 12 00:22:00.409332 kubelet[2443]: I0712 00:22:00.409301 2443 scope.go:117] "RemoveContainer" containerID="1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47" Jul 12 00:22:00.410763 containerd[1428]: time="2025-07-12T00:22:00.410727703Z" level=info msg="RemoveContainer for \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\"" Jul 12 00:22:00.414395 containerd[1428]: time="2025-07-12T00:22:00.414357952Z" level=info msg="RemoveContainer for \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\" returns successfully" Jul 12 00:22:00.414593 kubelet[2443]: I0712 00:22:00.414565 2443 scope.go:117] "RemoveContainer" containerID="d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb" Jul 12 00:22:00.415640 containerd[1428]: time="2025-07-12T00:22:00.415609246Z" level=info msg="RemoveContainer for \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\"" Jul 12 00:22:00.417831 containerd[1428]: time="2025-07-12T00:22:00.417801610Z" level=info msg="RemoveContainer for \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\" returns successfully" Jul 12 00:22:00.418058 kubelet[2443]: I0712 00:22:00.417980 2443 scope.go:117] "RemoveContainer" containerID="4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7" Jul 12 00:22:00.418229 containerd[1428]: time="2025-07-12T00:22:00.418196309Z" level=error msg="ContainerStatus for 
\"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\": not found" Jul 12 00:22:00.418335 kubelet[2443]: E0712 00:22:00.418316 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\": not found" containerID="4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7" Jul 12 00:22:00.418372 kubelet[2443]: I0712 00:22:00.418342 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7"} err="failed to get container status \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\": rpc error: code = NotFound desc = an error occurred when try to find container \"4a8f6c16a67c732033cb557dc5c8ba8422202bd81304c03d7b4a6e3189ef7da7\": not found" Jul 12 00:22:00.418372 kubelet[2443]: I0712 00:22:00.418362 2443 scope.go:117] "RemoveContainer" containerID="c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93" Jul 12 00:22:00.418567 containerd[1428]: time="2025-07-12T00:22:00.418523252Z" level=error msg="ContainerStatus for \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\": not found" Jul 12 00:22:00.418839 kubelet[2443]: E0712 00:22:00.418707 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\": not found" 
containerID="c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93" Jul 12 00:22:00.418839 kubelet[2443]: I0712 00:22:00.418738 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93"} err="failed to get container status \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\": rpc error: code = NotFound desc = an error occurred when try to find container \"c085cab260ba8564cb3070abdc98829268cbb1443916a9205612443ed6703a93\": not found" Jul 12 00:22:00.418839 kubelet[2443]: I0712 00:22:00.418756 2443 scope.go:117] "RemoveContainer" containerID="9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292" Jul 12 00:22:00.419160 containerd[1428]: time="2025-07-12T00:22:00.419088702Z" level=error msg="ContainerStatus for \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\": not found" Jul 12 00:22:00.419239 kubelet[2443]: E0712 00:22:00.419216 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\": not found" containerID="9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292" Jul 12 00:22:00.419277 kubelet[2443]: I0712 00:22:00.419243 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292"} err="failed to get container status \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\": rpc error: code = NotFound desc = an error occurred when try to find container \"9b82862341fe5a2c0ec6da81927943c370b4c7971fd8ea9e2952225dc0566292\": not found" Jul 12 
00:22:00.419277 kubelet[2443]: I0712 00:22:00.419262 2443 scope.go:117] "RemoveContainer" containerID="1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47" Jul 12 00:22:00.419483 containerd[1428]: time="2025-07-12T00:22:00.419450523Z" level=error msg="ContainerStatus for \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\": not found" Jul 12 00:22:00.419631 kubelet[2443]: E0712 00:22:00.419611 2443 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\": not found" containerID="1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47" Jul 12 00:22:00.419712 kubelet[2443]: I0712 00:22:00.419636 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47"} err="failed to get container status \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fe5283fd41bc578f2620fd5583a89302c4cff390e515e7d315a02984e100e47\": not found" Jul 12 00:22:00.419712 kubelet[2443]: I0712 00:22:00.419656 2443 scope.go:117] "RemoveContainer" containerID="d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb" Jul 12 00:22:00.420096 containerd[1428]: time="2025-07-12T00:22:00.419900660Z" level=error msg="ContainerStatus for \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\": not found" Jul 12 00:22:00.420156 kubelet[2443]: E0712 00:22:00.420042 2443 
log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\": not found" containerID="d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb" Jul 12 00:22:00.420156 kubelet[2443]: I0712 00:22:00.420069 2443 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb"} err="failed to get container status \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d03ec80b2a20a40a684b0b6b9621eb296c7aef97d303933cffef290b586bfafb\": not found" Jul 12 00:22:00.425471 kubelet[2443]: I0712 00:22:00.425437 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-lib-modules\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425536 kubelet[2443]: I0712 00:22:00.425482 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6k5xx\" (UniqueName: \"kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-kube-api-access-6k5xx\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425536 kubelet[2443]: I0712 00:22:00.425527 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-net\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425590 kubelet[2443]: I0712 00:22:00.425560 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started 
for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-hubble-tls\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425590 kubelet[2443]: I0712 00:22:00.425559 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.425590 kubelet[2443]: I0712 00:22:00.425577 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-config-path\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425656 kubelet[2443]: I0712 00:22:00.425605 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-hostproc\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425656 kubelet[2443]: I0712 00:22:00.425620 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cni-path\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425656 kubelet[2443]: I0712 00:22:00.425619 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: 
"9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.425656 kubelet[2443]: I0712 00:22:00.425638 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-bpf-maps\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425765 kubelet[2443]: I0712 00:22:00.425656 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-cgroup\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425765 kubelet[2443]: I0712 00:22:00.425695 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-xtables-lock\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425765 kubelet[2443]: I0712 00:22:00.425709 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-kernel\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425765 kubelet[2443]: I0712 00:22:00.425725 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-run\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425765 kubelet[2443]: I0712 00:22:00.425746 2443 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e030b61-a17b-4d59-aee6-e78f62adddcb-clustermesh-secrets\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425880 kubelet[2443]: I0712 00:22:00.425771 2443 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-etc-cni-netd\") pod \"9e030b61-a17b-4d59-aee6-e78f62adddcb\" (UID: \"9e030b61-a17b-4d59-aee6-e78f62adddcb\") " Jul 12 00:22:00.425880 kubelet[2443]: I0712 00:22:00.425805 2443 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mcn95\" (UniqueName: \"kubernetes.io/projected/08bbdb7a-df6a-4344-b96e-f30a27e23373-kube-api-access-mcn95\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.425880 kubelet[2443]: I0712 00:22:00.425816 2443 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/08bbdb7a-df6a-4344-b96e-f30a27e23373-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.425880 kubelet[2443]: I0712 00:22:00.425825 2443 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.425880 kubelet[2443]: I0712 00:22:00.425844 2443 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.425880 kubelet[2443]: I0712 00:22:00.425873 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod 
"9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.426011 kubelet[2443]: I0712 00:22:00.425894 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-hostproc" (OuterVolumeSpecName: "hostproc") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.426011 kubelet[2443]: I0712 00:22:00.425921 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cni-path" (OuterVolumeSpecName: "cni-path") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.426011 kubelet[2443]: I0712 00:22:00.425936 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.426011 kubelet[2443]: I0712 00:22:00.425958 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.426011 kubelet[2443]: I0712 00:22:00.425973 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.426116 kubelet[2443]: I0712 00:22:00.426000 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.426116 kubelet[2443]: I0712 00:22:00.426016 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 00:22:00.427756 kubelet[2443]: I0712 00:22:00.427594 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 00:22:00.428411 kubelet[2443]: I0712 00:22:00.428383 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-kube-api-access-6k5xx" (OuterVolumeSpecName: "kube-api-access-6k5xx") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "kube-api-access-6k5xx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:22:00.428686 kubelet[2443]: I0712 00:22:00.428549 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 00:22:00.428686 kubelet[2443]: I0712 00:22:00.428617 2443 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e030b61-a17b-4d59-aee6-e78f62adddcb-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9e030b61-a17b-4d59-aee6-e78f62adddcb" (UID: "9e030b61-a17b-4d59-aee6-e78f62adddcb"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 00:22:00.526419 kubelet[2443]: I0712 00:22:00.526371 2443 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526419 kubelet[2443]: I0712 00:22:00.526407 2443 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526419 kubelet[2443]: I0712 00:22:00.526420 2443 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526419 kubelet[2443]: I0712 00:22:00.526429 2443 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526438 2443 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526446 2443 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526454 2443 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526463 2443 
reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526472 2443 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526479 2443 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9e030b61-a17b-4d59-aee6-e78f62adddcb-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526492 2443 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9e030b61-a17b-4d59-aee6-e78f62adddcb-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.526645 kubelet[2443]: I0712 00:22:00.526501 2443 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6k5xx\" (UniqueName: \"kubernetes.io/projected/9e030b61-a17b-4d59-aee6-e78f62adddcb-kube-api-access-6k5xx\") on node \"localhost\" DevicePath \"\"" Jul 12 00:22:00.669460 systemd[1]: Removed slice kubepods-burstable-pod9e030b61_a17b_4d59_aee6_e78f62adddcb.slice - libcontainer container kubepods-burstable-pod9e030b61_a17b_4d59_aee6_e78f62adddcb.slice. Jul 12 00:22:00.669565 systemd[1]: kubepods-burstable-pod9e030b61_a17b_4d59_aee6_e78f62adddcb.slice: Consumed 7.079s CPU time. Jul 12 00:22:01.147122 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d7ad14e7a2501ce704c0a1509095abdff1a9b2cbc3635df4f1c1f06f5e3e9889-rootfs.mount: Deactivated successfully. 
Jul 12 00:22:01.147231 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc-rootfs.mount: Deactivated successfully. Jul 12 00:22:01.147285 systemd[1]: var-lib-kubelet-pods-08bbdb7a\x2ddf6a\x2d4344\x2db96e\x2df30a27e23373-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmcn95.mount: Deactivated successfully. Jul 12 00:22:01.147339 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-95e08c82d210ce871280ee68493ee72200046c97a5d4589187952096df799dcc-shm.mount: Deactivated successfully. Jul 12 00:22:01.147393 systemd[1]: var-lib-kubelet-pods-9e030b61\x2da17b\x2d4d59\x2daee6\x2de78f62adddcb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6k5xx.mount: Deactivated successfully. Jul 12 00:22:01.147444 systemd[1]: var-lib-kubelet-pods-9e030b61\x2da17b\x2d4d59\x2daee6\x2de78f62adddcb-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 00:22:01.147500 systemd[1]: var-lib-kubelet-pods-9e030b61\x2da17b\x2d4d59\x2daee6\x2de78f62adddcb-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 00:22:01.176959 kubelet[2443]: I0712 00:22:01.176917 2443 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08bbdb7a-df6a-4344-b96e-f30a27e23373" path="/var/lib/kubelet/pods/08bbdb7a-df6a-4344-b96e-f30a27e23373/volumes" Jul 12 00:22:01.177328 kubelet[2443]: I0712 00:22:01.177301 2443 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e030b61-a17b-4d59-aee6-e78f62adddcb" path="/var/lib/kubelet/pods/9e030b61-a17b-4d59-aee6-e78f62adddcb/volumes" Jul 12 00:22:02.071514 sshd[4085]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:02.080512 systemd[1]: sshd@22-10.0.0.97:22-10.0.0.1:38314.service: Deactivated successfully. Jul 12 00:22:02.082403 systemd[1]: session-23.scope: Deactivated successfully. 
Jul 12 00:22:02.082571 systemd[1]: session-23.scope: Consumed 1.855s CPU time. Jul 12 00:22:02.083725 systemd-logind[1410]: Session 23 logged out. Waiting for processes to exit. Jul 12 00:22:02.093038 systemd[1]: Started sshd@23-10.0.0.97:22-10.0.0.1:38320.service - OpenSSH per-connection server daemon (10.0.0.1:38320). Jul 12 00:22:02.094129 systemd-logind[1410]: Removed session 23. Jul 12 00:22:02.130499 sshd[4246]: Accepted publickey for core from 10.0.0.1 port 38320 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU Jul 12 00:22:02.132821 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 00:22:02.137167 systemd-logind[1410]: New session 24 of user core. Jul 12 00:22:02.147879 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 12 00:22:03.125213 sshd[4246]: pam_unix(sshd:session): session closed for user core Jul 12 00:22:03.135022 systemd[1]: sshd@23-10.0.0.97:22-10.0.0.1:38320.service: Deactivated successfully. Jul 12 00:22:03.138319 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 00:22:03.138482 kubelet[2443]: I0712 00:22:03.138395 2443 memory_manager.go:355] "RemoveStaleState removing state" podUID="08bbdb7a-df6a-4344-b96e-f30a27e23373" containerName="cilium-operator" Jul 12 00:22:03.138482 kubelet[2443]: I0712 00:22:03.138419 2443 memory_manager.go:355] "RemoveStaleState removing state" podUID="9e030b61-a17b-4d59-aee6-e78f62adddcb" containerName="cilium-agent" Jul 12 00:22:03.142759 systemd-logind[1410]: Session 24 logged out. Waiting for processes to exit. Jul 12 00:22:03.151120 systemd[1]: Started sshd@24-10.0.0.97:22-10.0.0.1:59036.service - OpenSSH per-connection server daemon (10.0.0.1:59036). Jul 12 00:22:03.153168 systemd-logind[1410]: Removed session 24. Jul 12 00:22:03.167179 systemd[1]: Created slice kubepods-burstable-podbcfef2c0_444d_4994_96b7_d4e5924476b3.slice - libcontainer container kubepods-burstable-podbcfef2c0_444d_4994_96b7_d4e5924476b3.slice. 
Jul 12 00:22:03.192006 sshd[4259]: Accepted publickey for core from 10.0.0.1 port 59036 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:22:03.192893 sshd[4259]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:22:03.196512 systemd-logind[1410]: New session 25 of user core.
Jul 12 00:22:03.205825 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 12 00:22:03.246039 kubelet[2443]: I0712 00:22:03.245996 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-cni-path\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246039 kubelet[2443]: I0712 00:22:03.246044 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-lib-modules\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246172 kubelet[2443]: I0712 00:22:03.246062 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/bcfef2c0-444d-4994-96b7-d4e5924476b3-clustermesh-secrets\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246172 kubelet[2443]: I0712 00:22:03.246080 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/bcfef2c0-444d-4994-96b7-d4e5924476b3-cilium-ipsec-secrets\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246172 kubelet[2443]: I0712 00:22:03.246097 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-xtables-lock\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246172 kubelet[2443]: I0712 00:22:03.246114 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-cilium-run\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246172 kubelet[2443]: I0712 00:22:03.246141 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-cilium-cgroup\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246172 kubelet[2443]: I0712 00:22:03.246161 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-etc-cni-netd\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246296 kubelet[2443]: I0712 00:22:03.246183 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-host-proc-sys-net\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246296 kubelet[2443]: I0712 00:22:03.246244 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvz99\" (UniqueName: \"kubernetes.io/projected/bcfef2c0-444d-4994-96b7-d4e5924476b3-kube-api-access-pvz99\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246296 kubelet[2443]: I0712 00:22:03.246286 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-hostproc\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246362 kubelet[2443]: I0712 00:22:03.246313 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/bcfef2c0-444d-4994-96b7-d4e5924476b3-cilium-config-path\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246362 kubelet[2443]: I0712 00:22:03.246345 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-bpf-maps\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246413 kubelet[2443]: I0712 00:22:03.246363 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/bcfef2c0-444d-4994-96b7-d4e5924476b3-host-proc-sys-kernel\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.246413 kubelet[2443]: I0712 00:22:03.246400 2443 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/bcfef2c0-444d-4994-96b7-d4e5924476b3-hubble-tls\") pod \"cilium-zht8l\" (UID: \"bcfef2c0-444d-4994-96b7-d4e5924476b3\") " pod="kube-system/cilium-zht8l"
Jul 12 00:22:03.255179 sshd[4259]: pam_unix(sshd:session): session closed for user core
Jul 12 00:22:03.264235 systemd[1]: sshd@24-10.0.0.97:22-10.0.0.1:59036.service: Deactivated successfully.
Jul 12 00:22:03.266589 systemd[1]: session-25.scope: Deactivated successfully.
Jul 12 00:22:03.268074 systemd-logind[1410]: Session 25 logged out. Waiting for processes to exit.
Jul 12 00:22:03.276975 systemd[1]: Started sshd@25-10.0.0.97:22-10.0.0.1:59044.service - OpenSSH per-connection server daemon (10.0.0.1:59044).
Jul 12 00:22:03.277934 systemd-logind[1410]: Removed session 25.
Jul 12 00:22:03.306104 sshd[4267]: Accepted publickey for core from 10.0.0.1 port 59044 ssh2: RSA SHA256:OQQn8rJodojt66LnxxVdgk0ZA2OqvboSVT5RIqmX+EU
Jul 12 00:22:03.307380 sshd[4267]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 12 00:22:03.310977 systemd-logind[1410]: New session 26 of user core.
Jul 12 00:22:03.325835 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 12 00:22:03.471801 kubelet[2443]: E0712 00:22:03.471735 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:03.472693 containerd[1428]: time="2025-07-12T00:22:03.472324665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zht8l,Uid:bcfef2c0-444d-4994-96b7-d4e5924476b3,Namespace:kube-system,Attempt:0,}"
Jul 12 00:22:03.490711 containerd[1428]: time="2025-07-12T00:22:03.490604909Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 12 00:22:03.490876 containerd[1428]: time="2025-07-12T00:22:03.490726303Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 12 00:22:03.490876 containerd[1428]: time="2025-07-12T00:22:03.490743622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:22:03.490876 containerd[1428]: time="2025-07-12T00:22:03.490849777Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 12 00:22:03.509854 systemd[1]: Started cri-containerd-85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b.scope - libcontainer container 85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b.
Jul 12 00:22:03.541437 containerd[1428]: time="2025-07-12T00:22:03.541395234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zht8l,Uid:bcfef2c0-444d-4994-96b7-d4e5924476b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\""
Jul 12 00:22:03.542247 kubelet[2443]: E0712 00:22:03.542213 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:03.548570 containerd[1428]: time="2025-07-12T00:22:03.548505693Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 12 00:22:03.562442 containerd[1428]: time="2025-07-12T00:22:03.562386747Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95\""
Jul 12 00:22:03.562964 containerd[1428]: time="2025-07-12T00:22:03.562911362Z" level=info msg="StartContainer for \"c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95\""
Jul 12 00:22:03.587852 systemd[1]: Started cri-containerd-c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95.scope - libcontainer container c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95.
Jul 12 00:22:03.610164 containerd[1428]: time="2025-07-12T00:22:03.610100259Z" level=info msg="StartContainer for \"c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95\" returns successfully"
Jul 12 00:22:03.622031 systemd[1]: cri-containerd-c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95.scope: Deactivated successfully.
Jul 12 00:22:03.650510 containerd[1428]: time="2025-07-12T00:22:03.650437725Z" level=info msg="shim disconnected" id=c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95 namespace=k8s.io
Jul 12 00:22:03.650510 containerd[1428]: time="2025-07-12T00:22:03.650492523Z" level=warning msg="cleaning up after shim disconnected" id=c1410ede16a642d29a104e471a33ac6eb99ab8689f03c01189b94fc998618a95 namespace=k8s.io
Jul 12 00:22:03.650510 containerd[1428]: time="2025-07-12T00:22:03.650501322Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:04.383136 kubelet[2443]: E0712 00:22:04.383039 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:04.391763 containerd[1428]: time="2025-07-12T00:22:04.390632815Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 12 00:22:04.405786 containerd[1428]: time="2025-07-12T00:22:04.405739113Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35\""
Jul 12 00:22:04.406278 containerd[1428]: time="2025-07-12T00:22:04.406244970Z" level=info msg="StartContainer for \"49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35\""
Jul 12 00:22:04.434819 systemd[1]: Started cri-containerd-49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35.scope - libcontainer container 49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35.
Jul 12 00:22:04.454491 containerd[1428]: time="2025-07-12T00:22:04.454447771Z" level=info msg="StartContainer for \"49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35\" returns successfully"
Jul 12 00:22:04.469822 systemd[1]: cri-containerd-49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35.scope: Deactivated successfully.
Jul 12 00:22:04.489069 containerd[1428]: time="2025-07-12T00:22:04.489007285Z" level=info msg="shim disconnected" id=49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35 namespace=k8s.io
Jul 12 00:22:04.489069 containerd[1428]: time="2025-07-12T00:22:04.489061803Z" level=warning msg="cleaning up after shim disconnected" id=49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35 namespace=k8s.io
Jul 12 00:22:04.489069 containerd[1428]: time="2025-07-12T00:22:04.489072562Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:05.221847 kubelet[2443]: E0712 00:22:05.221801 2443 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 12 00:22:05.352040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49a4a5b0aaeb67e07d0c60b9691506127f4256d93437a8919ce7cff08005aa35-rootfs.mount: Deactivated successfully.
Jul 12 00:22:05.386550 kubelet[2443]: E0712 00:22:05.386511 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:05.388650 containerd[1428]: time="2025-07-12T00:22:05.388610183Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 12 00:22:05.402853 containerd[1428]: time="2025-07-12T00:22:05.402800585Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803\""
Jul 12 00:22:05.403570 containerd[1428]: time="2025-07-12T00:22:05.403542471Z" level=info msg="StartContainer for \"6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803\""
Jul 12 00:22:05.430865 systemd[1]: Started cri-containerd-6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803.scope - libcontainer container 6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803.
Jul 12 00:22:05.458517 systemd[1]: cri-containerd-6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803.scope: Deactivated successfully.
Jul 12 00:22:05.459221 containerd[1428]: time="2025-07-12T00:22:05.459097652Z" level=info msg="StartContainer for \"6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803\" returns successfully"
Jul 12 00:22:05.484943 containerd[1428]: time="2025-07-12T00:22:05.484798655Z" level=info msg="shim disconnected" id=6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803 namespace=k8s.io
Jul 12 00:22:05.484943 containerd[1428]: time="2025-07-12T00:22:05.484865732Z" level=warning msg="cleaning up after shim disconnected" id=6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803 namespace=k8s.io
Jul 12 00:22:05.484943 containerd[1428]: time="2025-07-12T00:22:05.484876892Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:06.175291 kubelet[2443]: E0712 00:22:06.175249 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:06.351752 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6ac250d1afb5190438c7cb5c0d9fdd2b8430616bd395ba5abebbf08648338803-rootfs.mount: Deactivated successfully.
Jul 12 00:22:06.390725 kubelet[2443]: E0712 00:22:06.390557 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:06.394153 containerd[1428]: time="2025-07-12T00:22:06.393651308Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 12 00:22:06.409237 containerd[1428]: time="2025-07-12T00:22:06.409181791Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8\""
Jul 12 00:22:06.409276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1764283837.mount: Deactivated successfully.
Jul 12 00:22:06.409764 containerd[1428]: time="2025-07-12T00:22:06.409731927Z" level=info msg="StartContainer for \"7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8\""
Jul 12 00:22:06.434882 systemd[1]: Started cri-containerd-7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8.scope - libcontainer container 7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8.
Jul 12 00:22:06.455224 systemd[1]: cri-containerd-7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8.scope: Deactivated successfully.
Jul 12 00:22:06.457078 containerd[1428]: time="2025-07-12T00:22:06.456941429Z" level=info msg="StartContainer for \"7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8\" returns successfully"
Jul 12 00:22:06.474135 containerd[1428]: time="2025-07-12T00:22:06.474078082Z" level=info msg="shim disconnected" id=7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8 namespace=k8s.io
Jul 12 00:22:06.474135 containerd[1428]: time="2025-07-12T00:22:06.474134560Z" level=warning msg="cleaning up after shim disconnected" id=7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8 namespace=k8s.io
Jul 12 00:22:06.474319 containerd[1428]: time="2025-07-12T00:22:06.474144039Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 12 00:22:07.052535 kubelet[2443]: I0712 00:22:07.052475 2443 setters.go:602] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T00:22:07Z","lastTransitionTime":"2025-07-12T00:22:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 12 00:22:07.351731 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a95364d47f7dadcdf2a461940c2a4c6893a651c73f3933481b712baaa66dbf8-rootfs.mount: Deactivated successfully.
Jul 12 00:22:07.394591 kubelet[2443]: E0712 00:22:07.394567 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:07.396681 containerd[1428]: time="2025-07-12T00:22:07.396545087Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 12 00:22:07.407640 containerd[1428]: time="2025-07-12T00:22:07.407592301Z" level=info msg="CreateContainer within sandbox \"85fc35e4714e371feb437547a9de38b1806a965d5fdb0598b13923441beb314b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"89cd5fd1aa1e635f69926d7fcd2198c918e91fab14bc3d1b7f7f0e909cf83d7f\""
Jul 12 00:22:07.407920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4141998604.mount: Deactivated successfully.
Jul 12 00:22:07.409575 containerd[1428]: time="2025-07-12T00:22:07.408330629Z" level=info msg="StartContainer for \"89cd5fd1aa1e635f69926d7fcd2198c918e91fab14bc3d1b7f7f0e909cf83d7f\""
Jul 12 00:22:07.438838 systemd[1]: Started cri-containerd-89cd5fd1aa1e635f69926d7fcd2198c918e91fab14bc3d1b7f7f0e909cf83d7f.scope - libcontainer container 89cd5fd1aa1e635f69926d7fcd2198c918e91fab14bc3d1b7f7f0e909cf83d7f.
Jul 12 00:22:07.460085 containerd[1428]: time="2025-07-12T00:22:07.460038486Z" level=info msg="StartContainer for \"89cd5fd1aa1e635f69926d7fcd2198c918e91fab14bc3d1b7f7f0e909cf83d7f\" returns successfully"
Jul 12 00:22:07.737686 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 12 00:22:08.174817 kubelet[2443]: E0712 00:22:08.174769 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:08.398793 kubelet[2443]: E0712 00:22:08.398765 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:08.413501 kubelet[2443]: I0712 00:22:08.413134 2443 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zht8l" podStartSLOduration=5.413107663 podStartE2EDuration="5.413107663s" podCreationTimestamp="2025-07-12 00:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 00:22:08.41270232 +0000 UTC m=+83.349616339" watchObservedRunningTime="2025-07-12 00:22:08.413107663 +0000 UTC m=+83.350021602"
Jul 12 00:22:09.472455 kubelet[2443]: E0712 00:22:09.472409 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:09.702158 kubelet[2443]: E0712 00:22:09.702115 2443 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:43694->127.0.0.1:41931: write tcp 127.0.0.1:43694->127.0.0.1:41931: write: broken pipe
Jul 12 00:22:10.569890 systemd-networkd[1347]: lxc_health: Link UP
Jul 12 00:22:10.583851 systemd-networkd[1347]: lxc_health: Gained carrier
Jul 12 00:22:11.477383 kubelet[2443]: E0712 00:22:11.476697 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:11.804290 kubelet[2443]: E0712 00:22:11.804148 2443 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48296->127.0.0.1:41931: write tcp 127.0.0.1:48296->127.0.0.1:41931: write: broken pipe
Jul 12 00:22:12.064821 systemd-networkd[1347]: lxc_health: Gained IPv6LL
Jul 12 00:22:12.407724 kubelet[2443]: E0712 00:22:12.407325 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:13.409721 kubelet[2443]: E0712 00:22:13.408750 2443 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 12 00:22:16.045475 sshd[4267]: pam_unix(sshd:session): session closed for user core
Jul 12 00:22:16.048192 systemd[1]: sshd@25-10.0.0.97:22-10.0.0.1:59044.service: Deactivated successfully.
Jul 12 00:22:16.049901 systemd[1]: session-26.scope: Deactivated successfully.
Jul 12 00:22:16.051248 systemd-logind[1410]: Session 26 logged out. Waiting for processes to exit.
Jul 12 00:22:16.052492 systemd-logind[1410]: Removed session 26.