Jul 9 09:55:51.872991 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 9 09:55:51.873012 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT Wed Jul 9 08:43:25 -00 2025
Jul 9 09:55:51.873022 kernel: KASLR enabled
Jul 9 09:55:51.873028 kernel: efi: EFI v2.7 by EDK II
Jul 9 09:55:51.873033 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbae018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40218
Jul 9 09:55:51.873039 kernel: random: crng init done
Jul 9 09:55:51.873046 kernel: secureboot: Secure boot disabled
Jul 9 09:55:51.873051 kernel: ACPI: Early table checksum verification disabled
Jul 9 09:55:51.873057 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
Jul 9 09:55:51.873064 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 9 09:55:51.873070 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873076 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873082 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873088 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873095 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873102 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873108 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873115 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873121 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 09:55:51.873127 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 9 09:55:51.873133 kernel: NUMA: Failed to initialise from firmware
Jul 9 09:55:51.873139 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 09:55:51.873145 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
Jul 9 09:55:51.873151 kernel: Zone ranges:
Jul 9 09:55:51.873157 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 09:55:51.873164 kernel: DMA32 empty
Jul 9 09:55:51.873170 kernel: Normal empty
Jul 9 09:55:51.873176 kernel: Movable zone start for each node
Jul 9 09:55:51.873182 kernel: Early memory node ranges
Jul 9 09:55:51.873188 kernel: node 0: [mem 0x0000000040000000-0x00000000d967ffff]
Jul 9 09:55:51.873195 kernel: node 0: [mem 0x00000000d9680000-0x00000000d968ffff]
Jul 9 09:55:51.873201 kernel: node 0: [mem 0x00000000d9690000-0x00000000d976ffff]
Jul 9 09:55:51.873207 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
Jul 9 09:55:51.873213 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
Jul 9 09:55:51.873219 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 9 09:55:51.873225 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 9 09:55:51.873231 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 9 09:55:51.873238 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 9 09:55:51.873244 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 9 09:55:51.873250 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 9 09:55:51.873259 kernel: psci: probing for conduit method from ACPI.
Jul 9 09:55:51.873265 kernel: psci: PSCIv1.1 detected in firmware.
Jul 9 09:55:51.873272 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 9 09:55:51.873279 kernel: psci: Trusted OS migration not required
Jul 9 09:55:51.873286 kernel: psci: SMC Calling Convention v1.1
Jul 9 09:55:51.873292 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 9 09:55:51.873299 kernel: percpu: Embedded 31 pages/cpu s86696 r8192 d32088 u126976
Jul 9 09:55:51.873305 kernel: pcpu-alloc: s86696 r8192 d32088 u126976 alloc=31*4096
Jul 9 09:55:51.873312 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 9 09:55:51.873318 kernel: Detected PIPT I-cache on CPU0
Jul 9 09:55:51.873325 kernel: CPU features: detected: GIC system register CPU interface
Jul 9 09:55:51.873331 kernel: CPU features: detected: Hardware dirty bit management
Jul 9 09:55:51.873338 kernel: CPU features: detected: Spectre-v4
Jul 9 09:55:51.873346 kernel: CPU features: detected: Spectre-BHB
Jul 9 09:55:51.873352 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 9 09:55:51.873359 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 9 09:55:51.873365 kernel: CPU features: detected: ARM erratum 1418040
Jul 9 09:55:51.873372 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 9 09:55:51.873378 kernel: alternatives: applying boot alternatives
Jul 9 09:55:51.873386 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8106bab0e8f7b2aae0f933bb3a5b6e118e9ee32381eb8a383d83464922eeb861
Jul 9 09:55:51.873392 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 09:55:51.873399 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 09:55:51.873406 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 09:55:51.873412 kernel: Fallback order for Node 0: 0
Jul 9 09:55:51.873419 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Jul 9 09:55:51.873426 kernel: Policy zone: DMA
Jul 9 09:55:51.873432 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 09:55:51.873439 kernel: software IO TLB: area num 4.
Jul 9 09:55:51.873445 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
Jul 9 09:55:51.873452 kernel: Memory: 2387476K/2572288K available (10368K kernel code, 2186K rwdata, 8104K rodata, 38336K init, 897K bss, 184812K reserved, 0K cma-reserved)
Jul 9 09:55:51.873458 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 09:55:51.873465 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 09:55:51.873472 kernel: rcu: RCU event tracing is enabled.
Jul 9 09:55:51.873479 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 09:55:51.873485 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 09:55:51.873492 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 09:55:51.873499 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 09:55:51.873506 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 09:55:51.873513 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 9 09:55:51.873519 kernel: GICv3: 256 SPIs implemented
Jul 9 09:55:51.873525 kernel: GICv3: 0 Extended SPIs implemented
Jul 9 09:55:51.873532 kernel: Root IRQ handler: gic_handle_irq
Jul 9 09:55:51.873538 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 9 09:55:51.873615 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 9 09:55:51.873623 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 9 09:55:51.873630 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
Jul 9 09:55:51.873637 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
Jul 9 09:55:51.873646 kernel: GICv3: using LPI property table @0x00000000400f0000
Jul 9 09:55:51.873652 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
Jul 9 09:55:51.873659 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 09:55:51.873665 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:55:51.873700 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 9 09:55:51.873708 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 9 09:55:51.873715 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 9 09:55:51.873721 kernel: arm-pv: using stolen time PV
Jul 9 09:55:51.873728 kernel: Console: colour dummy device 80x25
Jul 9 09:55:51.873735 kernel: ACPI: Core revision 20230628
Jul 9 09:55:51.873742 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 9 09:55:51.873750 kernel: pid_max: default: 32768 minimum: 301
Jul 9 09:55:51.873757 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 9 09:55:51.873764 kernel: landlock: Up and running.
Jul 9 09:55:51.873770 kernel: SELinux: Initializing.
Jul 9 09:55:51.873777 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 09:55:51.873784 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 09:55:51.873791 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 09:55:51.873797 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 09:55:51.873804 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 09:55:51.873812 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 09:55:51.873819 kernel: Platform MSI: ITS@0x8080000 domain created
Jul 9 09:55:51.873825 kernel: PCI/MSI: ITS@0x8080000 domain created
Jul 9 09:55:51.873832 kernel: Remapping and enabling EFI services.
Jul 9 09:55:51.873839 kernel: smp: Bringing up secondary CPUs ...
Jul 9 09:55:51.873845 kernel: Detected PIPT I-cache on CPU1
Jul 9 09:55:51.873852 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 9 09:55:51.873859 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
Jul 9 09:55:51.873865 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:55:51.873873 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 9 09:55:51.873880 kernel: Detected PIPT I-cache on CPU2
Jul 9 09:55:51.873892 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 9 09:55:51.873900 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
Jul 9 09:55:51.873907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:55:51.873914 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 9 09:55:51.873920 kernel: Detected PIPT I-cache on CPU3
Jul 9 09:55:51.873927 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 9 09:55:51.873934 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
Jul 9 09:55:51.873943 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 9 09:55:51.873950 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 9 09:55:51.873956 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 09:55:51.873963 kernel: SMP: Total of 4 processors activated.
Jul 9 09:55:51.873970 kernel: CPU features: detected: 32-bit EL0 Support
Jul 9 09:55:51.873977 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 9 09:55:51.873984 kernel: CPU features: detected: Common not Private translations
Jul 9 09:55:51.873991 kernel: CPU features: detected: CRC32 instructions
Jul 9 09:55:51.874000 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 9 09:55:51.874007 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 9 09:55:51.874014 kernel: CPU features: detected: LSE atomic instructions
Jul 9 09:55:51.874021 kernel: CPU features: detected: Privileged Access Never
Jul 9 09:55:51.874027 kernel: CPU features: detected: RAS Extension Support
Jul 9 09:55:51.874034 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 9 09:55:51.874041 kernel: CPU: All CPU(s) started at EL1
Jul 9 09:55:51.874048 kernel: alternatives: applying system-wide alternatives
Jul 9 09:55:51.874055 kernel: devtmpfs: initialized
Jul 9 09:55:51.874062 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 09:55:51.874071 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 09:55:51.874078 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 09:55:51.874085 kernel: SMBIOS 3.0.0 present.
Jul 9 09:55:51.874091 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 9 09:55:51.874098 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 09:55:51.874106 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 9 09:55:51.874113 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 9 09:55:51.874120 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 9 09:55:51.874128 kernel: audit: initializing netlink subsys (disabled)
Jul 9 09:55:51.874135 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1
Jul 9 09:55:51.874142 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 09:55:51.874149 kernel: cpuidle: using governor menu
Jul 9 09:55:51.874156 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 9 09:55:51.874163 kernel: ASID allocator initialised with 32768 entries
Jul 9 09:55:51.874170 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 09:55:51.874177 kernel: Serial: AMBA PL011 UART driver
Jul 9 09:55:51.874184 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 9 09:55:51.874192 kernel: Modules: 0 pages in range for non-PLT usage
Jul 9 09:55:51.874199 kernel: Modules: 509264 pages in range for PLT usage
Jul 9 09:55:51.874206 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 09:55:51.874213 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 09:55:51.874219 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 9 09:55:51.874226 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 9 09:55:51.874233 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 09:55:51.874241 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 09:55:51.874248 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 9 09:55:51.874256 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 9 09:55:51.874263 kernel: ACPI: Added _OSI(Module Device)
Jul 9 09:55:51.874270 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 09:55:51.874277 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 09:55:51.874283 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 09:55:51.874290 kernel: ACPI: Interpreter enabled
Jul 9 09:55:51.874297 kernel: ACPI: Using GIC for interrupt routing
Jul 9 09:55:51.874304 kernel: ACPI: MCFG table detected, 1 entries
Jul 9 09:55:51.874311 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 9 09:55:51.874318 kernel: printk: console [ttyAMA0] enabled
Jul 9 09:55:51.874326 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 09:55:51.874468 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 09:55:51.874559 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 9 09:55:51.874634 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 9 09:55:51.874711 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 9 09:55:51.874777 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 9 09:55:51.874786 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 9 09:55:51.874797 kernel: PCI host bridge to bus 0000:00
Jul 9 09:55:51.874868 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 9 09:55:51.874929 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 9 09:55:51.874986 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 9 09:55:51.875043 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 09:55:51.875123 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Jul 9 09:55:51.875202 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Jul 9 09:55:51.875268 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Jul 9 09:55:51.875333 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Jul 9 09:55:51.875398 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 9 09:55:51.875463 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 9 09:55:51.875528 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Jul 9 09:55:51.875624 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Jul 9 09:55:51.875700 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 9 09:55:51.875762 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 9 09:55:51.875821 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 9 09:55:51.875830 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 9 09:55:51.875837 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 9 09:55:51.875844 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 9 09:55:51.875851 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 9 09:55:51.875858 kernel: iommu: Default domain type: Translated
Jul 9 09:55:51.875868 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 9 09:55:51.875875 kernel: efivars: Registered efivars operations
Jul 9 09:55:51.875882 kernel: vgaarb: loaded
Jul 9 09:55:51.875889 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 9 09:55:51.875896 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 09:55:51.875903 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 09:55:51.875910 kernel: pnp: PnP ACPI init
Jul 9 09:55:51.875982 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 9 09:55:51.875994 kernel: pnp: PnP ACPI: found 1 devices
Jul 9 09:55:51.876001 kernel: NET: Registered PF_INET protocol family
Jul 9 09:55:51.876008 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 09:55:51.876015 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 09:55:51.876022 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 09:55:51.876029 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 09:55:51.876037 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 09:55:51.876044 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 09:55:51.876051 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 09:55:51.876059 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 09:55:51.876066 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 09:55:51.876073 kernel: PCI: CLS 0 bytes, default 64
Jul 9 09:55:51.876080 kernel: kvm [1]: HYP mode not available
Jul 9 09:55:51.876087 kernel: Initialise system trusted keyrings
Jul 9 09:55:51.876094 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 09:55:51.876101 kernel: Key type asymmetric registered
Jul 9 09:55:51.876108 kernel: Asymmetric key parser 'x509' registered
Jul 9 09:55:51.876114 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 9 09:55:51.876123 kernel: io scheduler mq-deadline registered
Jul 9 09:55:51.876130 kernel: io scheduler kyber registered
Jul 9 09:55:51.876137 kernel: io scheduler bfq registered
Jul 9 09:55:51.876144 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 9 09:55:51.876151 kernel: ACPI: button: Power Button [PWRB]
Jul 9 09:55:51.876158 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 9 09:55:51.876226 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 9 09:55:51.876235 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 09:55:51.876243 kernel: thunder_xcv, ver 1.0
Jul 9 09:55:51.876250 kernel: thunder_bgx, ver 1.0
Jul 9 09:55:51.876259 kernel: nicpf, ver 1.0
Jul 9 09:55:51.876265 kernel: nicvf, ver 1.0
Jul 9 09:55:51.876339 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 9 09:55:51.876401 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-09T09:55:51 UTC (1752054951)
Jul 9 09:55:51.876410 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 9 09:55:51.876417 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Jul 9 09:55:51.876424 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jul 9 09:55:51.876433 kernel: watchdog: Hard watchdog permanently disabled
Jul 9 09:55:51.876440 kernel: NET: Registered PF_INET6 protocol family
Jul 9 09:55:51.876447 kernel: Segment Routing with IPv6
Jul 9 09:55:51.876454 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 09:55:51.876460 kernel: NET: Registered PF_PACKET protocol family
Jul 9 09:55:51.876467 kernel: Key type dns_resolver registered
Jul 9 09:55:51.876474 kernel: registered taskstats version 1
Jul 9 09:55:51.876481 kernel: Loading compiled-in X.509 certificates
Jul 9 09:55:51.876488 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: 41c1c5381fe8e35ae22346a08da6d25fbc0dc23e'
Jul 9 09:55:51.876495 kernel: Key type .fscrypt registered
Jul 9 09:55:51.876503 kernel: Key type fscrypt-provisioning registered
Jul 9 09:55:51.876511 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 09:55:51.876518 kernel: ima: Allocated hash algorithm: sha1
Jul 9 09:55:51.876524 kernel: ima: No architecture policies found
Jul 9 09:55:51.876531 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 9 09:55:51.876538 kernel: clk: Disabling unused clocks
Jul 9 09:55:51.876559 kernel: Freeing unused kernel memory: 38336K
Jul 9 09:55:51.876567 kernel: Run /init as init process
Jul 9 09:55:51.876576 kernel: with arguments:
Jul 9 09:55:51.876582 kernel: /init
Jul 9 09:55:51.876589 kernel: with environment:
Jul 9 09:55:51.876596 kernel: HOME=/
Jul 9 09:55:51.876603 kernel: TERM=linux
Jul 9 09:55:51.876609 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 09:55:51.876617 systemd[1]: Successfully made /usr/ read-only.
Jul 9 09:55:51.876627 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 09:55:51.876637 systemd[1]: Detected virtualization kvm.
Jul 9 09:55:51.876644 systemd[1]: Detected architecture arm64.
Jul 9 09:55:51.876651 systemd[1]: Running in initrd.
Jul 9 09:55:51.876658 systemd[1]: No hostname configured, using default hostname.
Jul 9 09:55:51.876666 systemd[1]: Hostname set to .
Jul 9 09:55:51.876681 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 09:55:51.876689 systemd[1]: Queued start job for default target initrd.target.
Jul 9 09:55:51.876697 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 09:55:51.876706 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 09:55:51.876714 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 09:55:51.876722 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 09:55:51.876729 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 09:55:51.876738 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 09:55:51.876746 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 09:55:51.876754 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 09:55:51.876763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 09:55:51.876771 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 09:55:51.876778 systemd[1]: Reached target paths.target - Path Units.
Jul 9 09:55:51.876786 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 09:55:51.876793 systemd[1]: Reached target swap.target - Swaps.
Jul 9 09:55:51.876800 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 09:55:51.876808 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 09:55:51.876815 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 09:55:51.876823 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 09:55:51.876832 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 09:55:51.876840 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 09:55:51.876847 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 09:55:51.876855 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 09:55:51.876863 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 09:55:51.876870 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 09:55:51.876878 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 09:55:51.876885 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 09:55:51.876894 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 09:55:51.876901 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 09:55:51.876909 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 09:55:51.876916 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 09:55:51.876924 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 09:55:51.876931 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 09:55:51.876940 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 09:55:51.876948 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 09:55:51.876956 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 09:55:51.876984 systemd-journald[240]: Collecting audit messages is disabled.
Jul 9 09:55:51.877006 systemd-journald[240]: Journal started
Jul 9 09:55:51.877023 systemd-journald[240]: Runtime Journal (/run/log/journal/3e04392f27884d7ab0789e89a26120ed) is 5.9M, max 47.3M, 41.4M free.
Jul 9 09:55:51.868393 systemd-modules-load[241]: Inserted module 'overlay'
Jul 9 09:55:51.883608 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 09:55:51.883646 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 09:55:51.885758 systemd-modules-load[241]: Inserted module 'br_netfilter'
Jul 9 09:55:51.887325 kernel: Bridge firewalling registered
Jul 9 09:55:51.887343 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 09:55:51.889157 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 09:55:51.890153 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 09:55:51.895665 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 09:55:51.897220 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 09:55:51.900732 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 09:55:51.909589 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 09:55:51.910637 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 09:55:51.912599 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 09:55:51.928829 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 09:55:51.929855 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 09:55:51.934522 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 09:55:51.953104 dracut-cmdline[282]: dracut-dracut-053
Jul 9 09:55:51.957241 dracut-cmdline[282]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=8106bab0e8f7b2aae0f933bb3a5b6e118e9ee32381eb8a383d83464922eeb861
Jul 9 09:55:51.964231 systemd-resolved[279]: Positive Trust Anchors:
Jul 9 09:55:51.964250 systemd-resolved[279]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 09:55:51.964282 systemd-resolved[279]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 09:55:51.969001 systemd-resolved[279]: Defaulting to hostname 'linux'.
Jul 9 09:55:51.969971 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 09:55:51.972365 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 09:55:52.047579 kernel: SCSI subsystem initialized
Jul 9 09:55:52.052561 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 09:55:52.059565 kernel: iscsi: registered transport (tcp)
Jul 9 09:55:52.073793 kernel: iscsi: registered transport (qla4xxx)
Jul 9 09:55:52.073840 kernel: QLogic iSCSI HBA Driver
Jul 9 09:55:52.116605 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 09:55:52.126698 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 09:55:52.143006 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 09:55:52.143070 kernel: device-mapper: uevent: version 1.0.3
Jul 9 09:55:52.143834 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 9 09:55:52.193586 kernel: raid6: neonx8 gen() 15788 MB/s
Jul 9 09:55:52.207560 kernel: raid6: neonx4 gen() 15827 MB/s
Jul 9 09:55:52.224567 kernel: raid6: neonx2 gen() 11598 MB/s
Jul 9 09:55:52.241561 kernel: raid6: neonx1 gen() 10502 MB/s
Jul 9 09:55:52.258561 kernel: raid6: int64x8 gen() 6789 MB/s
Jul 9 09:55:52.275557 kernel: raid6: int64x4 gen() 7346 MB/s
Jul 9 09:55:52.292560 kernel: raid6: int64x2 gen() 6108 MB/s
Jul 9 09:55:52.309561 kernel: raid6: int64x1 gen() 5056 MB/s
Jul 9 09:55:52.309582 kernel: raid6: using algorithm neonx4 gen() 15827 MB/s
Jul 9 09:55:52.326566 kernel: raid6: .... xor() 12460 MB/s, rmw enabled
Jul 9 09:55:52.326582 kernel: raid6: using neon recovery algorithm
Jul 9 09:55:52.331689 kernel: xor: measuring software checksum speed
Jul 9 09:55:52.331709 kernel: 8regs : 21505 MB/sec
Jul 9 09:55:52.332694 kernel: 32regs : 21687 MB/sec
Jul 9 09:55:52.332713 kernel: arm64_neon : 28013 MB/sec
Jul 9 09:55:52.332722 kernel: xor: using function: arm64_neon (28013 MB/sec)
Jul 9 09:55:52.387573 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 09:55:52.402237 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 09:55:52.412730 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 09:55:52.429565 systemd-udevd[464]: Using default interface naming scheme 'v255'.
Jul 9 09:55:52.434041 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 09:55:52.439724 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 09:55:52.454578 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
Jul 9 09:55:52.485655 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 09:55:52.501759 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 09:55:52.563773 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 09:55:52.569961 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 09:55:52.588600 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 09:55:52.592481 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 09:55:52.594129 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 09:55:52.596283 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 09:55:52.603830 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 09:55:52.614074 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 09:55:52.625376 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
Jul 9 09:55:52.625566 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 9 09:55:52.629937 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 09:55:52.630052 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 09:55:52.632860 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 09:55:52.634157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 09:55:52.642389 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 09:55:52.642414 kernel: GPT:9289727 != 19775487
Jul 9 09:55:52.642424 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 09:55:52.642434 kernel: GPT:9289727 != 19775487
Jul 9 09:55:52.642450 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 9 09:55:52.642460 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 09:55:52.634303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 09:55:52.640259 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 09:55:52.650023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 09:55:52.660577 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (522)
Jul 9 09:55:52.660630 kernel: BTRFS: device fsid c58bb8ce-1b39-452b-9f5f-454a5cd013ab devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (512)
Jul 9 09:55:52.665009 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 09:55:52.677646 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 9 09:55:52.685027 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 9 09:55:52.696819 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 09:55:52.702945 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 9 09:55:52.704031 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 9 09:55:52.717714 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 09:55:52.721737 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 09:55:52.724429 disk-uuid[554]: Primary Header is updated.
Jul 9 09:55:52.724429 disk-uuid[554]: Secondary Entries is updated.
Jul 9 09:55:52.724429 disk-uuid[554]: Secondary Header is updated.
Jul 9 09:55:52.728002 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 09:55:52.744371 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 09:55:53.742576 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 9 09:55:53.742633 disk-uuid[555]: The operation has completed successfully. Jul 9 09:55:53.768734 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 9 09:55:53.768831 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 9 09:55:53.801709 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 9 09:55:53.804532 sh[574]: Success Jul 9 09:55:53.818613 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jul 9 09:55:53.856376 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 9 09:55:53.857961 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 9 09:55:53.858704 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 9 09:55:53.869614 kernel: BTRFS info (device dm-0): first mount of filesystem c58bb8ce-1b39-452b-9f5f-454a5cd013ab Jul 9 09:55:53.869680 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:55:53.869703 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 9 09:55:53.870910 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 9 09:55:53.870924 kernel: BTRFS info (device dm-0): using free space tree Jul 9 09:55:53.874580 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 9 09:55:53.875713 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 9 09:55:53.881712 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 9 09:55:53.883128 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jul 9 09:55:53.896801 kernel: BTRFS info (device vda6): first mount of filesystem 1c7652f0-6738-4f88-afc7-d770f62208e3 Jul 9 09:55:53.896855 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:55:53.897554 kernel: BTRFS info (device vda6): using free space tree Jul 9 09:55:53.899734 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 09:55:53.904574 kernel: BTRFS info (device vda6): last unmount of filesystem 1c7652f0-6738-4f88-afc7-d770f62208e3 Jul 9 09:55:53.907982 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 9 09:55:53.914745 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 9 09:55:53.979278 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 09:55:53.988737 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 9 09:55:54.012588 ignition[665]: Ignition 2.20.0 Jul 9 09:55:54.012598 ignition[665]: Stage: fetch-offline Jul 9 09:55:54.012631 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:55:54.012640 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:55:54.012814 ignition[665]: parsed url from cmdline: "" Jul 9 09:55:54.015284 systemd-networkd[764]: lo: Link UP Jul 9 09:55:54.012818 ignition[665]: no config URL provided Jul 9 09:55:54.015287 systemd-networkd[764]: lo: Gained carrier Jul 9 09:55:54.012822 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jul 9 09:55:54.016109 systemd-networkd[764]: Enumeration completed Jul 9 09:55:54.012830 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jul 9 09:55:54.016528 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 9 09:55:54.012852 ignition[665]: op(1): [started] loading QEMU firmware config module Jul 9 09:55:54.016532 systemd-networkd[764]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 9 09:55:54.012857 ignition[665]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 9 09:55:54.016826 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 9 09:55:54.020762 ignition[665]: op(1): [finished] loading QEMU firmware config module Jul 9 09:55:54.017263 systemd-networkd[764]: eth0: Link UP Jul 9 09:55:54.017267 systemd-networkd[764]: eth0: Gained carrier Jul 9 09:55:54.017273 systemd-networkd[764]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 9 09:55:54.017931 systemd[1]: Reached target network.target - Network. Jul 9 09:55:54.030588 systemd-networkd[764]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 9 09:55:54.064928 ignition[665]: parsing config with SHA512: d801adde5f8a878212b064f996e387b46bc2d6a2fbac239f42ad24498b9533003ebffc413ac00db0c5232bcc6fae5ab27d14007850f404eaabbc8ce9e467c80b Jul 9 09:55:54.070972 unknown[665]: fetched base config from "system" Jul 9 09:55:54.070983 unknown[665]: fetched user config from "qemu" Jul 9 09:55:54.071416 ignition[665]: fetch-offline: fetch-offline passed Jul 9 09:55:54.073257 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 09:55:54.071495 ignition[665]: Ignition finished successfully Jul 9 09:55:54.074326 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 9 09:55:54.081720 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 9 09:55:54.094517 ignition[771]: Ignition 2.20.0 Jul 9 09:55:54.094527 ignition[771]: Stage: kargs Jul 9 09:55:54.094716 ignition[771]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:55:54.094726 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:55:54.095607 ignition[771]: kargs: kargs passed Jul 9 09:55:54.095653 ignition[771]: Ignition finished successfully Jul 9 09:55:54.098649 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 9 09:55:54.105741 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 9 09:55:54.115241 ignition[780]: Ignition 2.20.0 Jul 9 09:55:54.115250 ignition[780]: Stage: disks Jul 9 09:55:54.115426 ignition[780]: no configs at "/usr/lib/ignition/base.d" Jul 9 09:55:54.115436 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:55:54.116337 ignition[780]: disks: disks passed Jul 9 09:55:54.116379 ignition[780]: Ignition finished successfully Jul 9 09:55:54.119036 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 9 09:55:54.119949 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 9 09:55:54.121124 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 9 09:55:54.122588 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 9 09:55:54.123992 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 09:55:54.125241 systemd[1]: Reached target basic.target - Basic System. Jul 9 09:55:54.138742 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 9 09:55:54.148284 systemd-fsck[791]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jul 9 09:55:54.152479 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 9 09:55:54.168709 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 9 09:55:54.210450 systemd[1]: Mounted sysroot.mount - /sysroot. 
Jul 9 09:55:54.211607 kernel: EXT4-fs (vda9): mounted filesystem 2188e201-95c4-4c53-8dbf-8a24eaac44bf r/w with ordered data mode. Quota mode: none. Jul 9 09:55:54.211503 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 9 09:55:54.221646 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 9 09:55:54.223237 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 9 09:55:54.224255 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 9 09:55:54.224293 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 9 09:55:54.229572 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (799) Jul 9 09:55:54.224316 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 09:55:54.232502 kernel: BTRFS info (device vda6): first mount of filesystem 1c7652f0-6738-4f88-afc7-d770f62208e3 Jul 9 09:55:54.232521 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:55:54.232531 kernel: BTRFS info (device vda6): using free space tree Jul 9 09:55:54.230610 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 9 09:55:54.233993 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 9 09:55:54.235709 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 09:55:54.236318 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 9 09:55:54.277228 initrd-setup-root[823]: cut: /sysroot/etc/passwd: No such file or directory Jul 9 09:55:54.281305 initrd-setup-root[830]: cut: /sysroot/etc/group: No such file or directory Jul 9 09:55:54.285023 initrd-setup-root[837]: cut: /sysroot/etc/shadow: No such file or directory Jul 9 09:55:54.288489 initrd-setup-root[844]: cut: /sysroot/etc/gshadow: No such file or directory Jul 9 09:55:54.355569 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 9 09:55:54.362657 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 9 09:55:54.363989 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 9 09:55:54.368559 kernel: BTRFS info (device vda6): last unmount of filesystem 1c7652f0-6738-4f88-afc7-d770f62208e3 Jul 9 09:55:54.384269 ignition[912]: INFO : Ignition 2.20.0 Jul 9 09:55:54.384269 ignition[912]: INFO : Stage: mount Jul 9 09:55:54.385565 ignition[912]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:55:54.385565 ignition[912]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:55:54.385565 ignition[912]: INFO : mount: mount passed Jul 9 09:55:54.385565 ignition[912]: INFO : Ignition finished successfully Jul 9 09:55:54.388516 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 9 09:55:54.389386 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 9 09:55:54.400668 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 9 09:55:54.869018 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 9 09:55:54.878774 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 9 09:55:54.886559 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (926) Jul 9 09:55:54.888294 kernel: BTRFS info (device vda6): first mount of filesystem 1c7652f0-6738-4f88-afc7-d770f62208e3 Jul 9 09:55:54.888314 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 9 09:55:54.888838 kernel: BTRFS info (device vda6): using free space tree Jul 9 09:55:54.892578 kernel: BTRFS info (device vda6): auto enabling async discard Jul 9 09:55:54.893768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 9 09:55:54.915100 ignition[943]: INFO : Ignition 2.20.0 Jul 9 09:55:54.915100 ignition[943]: INFO : Stage: files Jul 9 09:55:54.916478 ignition[943]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:55:54.916478 ignition[943]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:55:54.916478 ignition[943]: DEBUG : files: compiled without relabeling support, skipping Jul 9 09:55:54.920067 ignition[943]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 9 09:55:54.920067 ignition[943]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 9 09:55:54.920067 ignition[943]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 9 09:55:54.923421 ignition[943]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 9 09:55:54.923421 ignition[943]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 9 09:55:54.923421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 9 09:55:54.923421 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 9 09:55:54.920839 unknown[943]: wrote ssh authorized keys file for user: core Jul 9 09:55:55.027707 ignition[943]: INFO : 
files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 9 09:55:55.059690 systemd-networkd[764]: eth0: Gained IPv6LL Jul 9 09:55:55.667654 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 9 09:55:55.669199 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 09:55:55.669199 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 9 09:55:56.027839 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 9 09:55:56.068770 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 09:55:56.070173 
ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 09:55:56.070173 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 9 09:55:56.479149 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 9 09:55:56.721192 ignition[943]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 9 09:55:56.721192 ignition[943]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 9 09:55:56.723984 ignition[943]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 09:55:56.723984 ignition[943]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 9 09:55:56.723984 
ignition[943]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jul 9 09:55:56.723984 ignition[943]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 9 09:55:56.723984 ignition[943]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 09:55:56.723984 ignition[943]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 9 09:55:56.723984 ignition[943]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 9 09:55:56.723984 ignition[943]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 9 09:55:56.738868 ignition[943]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 09:55:56.742175 ignition[943]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 9 09:55:56.744310 ignition[943]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 9 09:55:56.744310 ignition[943]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 9 09:55:56.744310 ignition[943]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 9 09:55:56.744310 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 9 09:55:56.744310 ignition[943]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 9 09:55:56.744310 ignition[943]: INFO : files: files passed Jul 9 09:55:56.744310 ignition[943]: INFO : Ignition finished successfully Jul 9 09:55:56.745234 systemd[1]: Finished ignition-files.service - Ignition (files). 
Jul 9 09:55:56.760705 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 9 09:55:56.762226 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 9 09:55:56.764488 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 9 09:55:56.765683 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 9 09:55:56.770287 initrd-setup-root-after-ignition[972]: grep: /sysroot/oem/oem-release: No such file or directory Jul 9 09:55:56.772781 initrd-setup-root-after-ignition[974]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:55:56.772781 initrd-setup-root-after-ignition[974]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:55:56.775373 initrd-setup-root-after-ignition[978]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 9 09:55:56.776325 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 09:55:56.777642 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 9 09:55:56.791781 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 9 09:55:56.810324 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 9 09:55:56.810438 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 9 09:55:56.812192 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 9 09:55:56.813552 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 9 09:55:56.815058 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 9 09:55:56.815895 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 9 09:55:56.831046 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. 
Jul 9 09:55:56.847750 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 9 09:55:56.855415 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 9 09:55:56.856415 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 9 09:55:56.857922 systemd[1]: Stopped target timers.target - Timer Units. Jul 9 09:55:56.859241 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 9 09:55:56.859373 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 9 09:55:56.861137 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 9 09:55:56.862580 systemd[1]: Stopped target basic.target - Basic System. Jul 9 09:55:56.863844 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 9 09:55:56.865200 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 9 09:55:56.866649 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 9 09:55:56.868154 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 9 09:55:56.869523 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 9 09:55:56.871023 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 9 09:55:56.872466 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 9 09:55:56.873904 systemd[1]: Stopped target swap.target - Swaps. Jul 9 09:55:56.875043 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 9 09:55:56.875177 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 9 09:55:56.876889 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 9 09:55:56.878343 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 9 09:55:56.879771 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 9 09:55:56.880624 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 9 09:55:56.881979 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 9 09:55:56.882105 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 9 09:55:56.884113 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 9 09:55:56.884245 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 9 09:55:56.885691 systemd[1]: Stopped target paths.target - Path Units. Jul 9 09:55:56.886861 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 9 09:55:56.891612 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 9 09:55:56.892649 systemd[1]: Stopped target slices.target - Slice Units. Jul 9 09:55:56.894190 systemd[1]: Stopped target sockets.target - Socket Units. Jul 9 09:55:56.895341 systemd[1]: iscsid.socket: Deactivated successfully. Jul 9 09:55:56.895429 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 9 09:55:56.896510 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 9 09:55:56.896601 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 9 09:55:56.897706 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 9 09:55:56.897823 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 9 09:55:56.899073 systemd[1]: ignition-files.service: Deactivated successfully. Jul 9 09:55:56.899190 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 9 09:55:56.910777 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 9 09:55:56.911525 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 9 09:55:56.911703 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. 
Jul 9 09:55:56.914749 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 9 09:55:56.916152 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 9 09:55:56.916294 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 9 09:55:56.917308 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 9 09:55:56.917408 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 9 09:55:56.923325 ignition[998]: INFO : Ignition 2.20.0 Jul 9 09:55:56.923325 ignition[998]: INFO : Stage: umount Jul 9 09:55:56.924704 ignition[998]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 9 09:55:56.924704 ignition[998]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 9 09:55:56.924704 ignition[998]: INFO : umount: umount passed Jul 9 09:55:56.924704 ignition[998]: INFO : Ignition finished successfully Jul 9 09:55:56.924954 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 9 09:55:56.925054 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 9 09:55:56.927006 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 9 09:55:56.927090 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 9 09:55:56.928995 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 9 09:55:56.930175 systemd[1]: Stopped target network.target - Network. Jul 9 09:55:56.930975 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 9 09:55:56.931045 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 9 09:55:56.932540 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 9 09:55:56.932672 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 9 09:55:56.934139 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 9 09:55:56.934187 systemd[1]: Stopped ignition-setup.service - Ignition (setup). 
Jul 9 09:55:56.935815 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 9 09:55:56.935860 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 9 09:55:56.937627 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 9 09:55:56.939035 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 9 09:55:56.942329 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 9 09:55:56.942440 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 9 09:55:56.946449 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 9 09:55:56.946819 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 9 09:55:56.946867 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 9 09:55:56.949585 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 9 09:55:56.952299 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 9 09:55:56.952405 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 9 09:55:56.955770 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 9 09:55:56.956004 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 9 09:55:56.956043 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 9 09:55:56.966671 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 9 09:55:56.967378 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 9 09:55:56.967449 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 9 09:55:56.969041 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 09:55:56.969091 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jul 9 09:55:56.972041 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 09:55:56.972092 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 09:55:56.973240 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 09:55:56.976471 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 09:55:56.982567 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 09:55:56.982694 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 09:55:56.986689 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 09:55:56.986860 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 09:55:56.988481 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 09:55:56.988633 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 09:55:56.990452 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 09:55:56.990528 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 09:55:56.991419 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 09:55:56.991474 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 09:55:56.993790 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 09:55:56.993851 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 09:55:56.996267 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 09:55:56.996329 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 09:55:57.007749 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 09:55:57.008529 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 09:55:57.008627 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 09:55:57.011093 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 09:55:57.011145 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 09:55:57.014203 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 09:55:57.014327 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 09:55:57.015833 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 09:55:57.015939 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 09:55:57.018043 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 09:55:57.019381 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 09:55:57.019449 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 09:55:57.021842 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 09:55:57.032433 systemd[1]: Switching root.
Jul 9 09:55:57.061717 systemd-journald[240]: Journal stopped
Jul 9 09:55:57.880609 systemd-journald[240]: Received SIGTERM from PID 1 (systemd).
Jul 9 09:55:57.880682 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 09:55:57.880695 kernel: SELinux: policy capability open_perms=1
Jul 9 09:55:57.880705 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 09:55:57.880714 kernel: SELinux: policy capability always_check_network=0
Jul 9 09:55:57.880723 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 09:55:57.880732 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 09:55:57.880741 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 09:55:57.880751 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 09:55:57.880763 kernel: audit: type=1403 audit(1752054957.236:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 09:55:57.880774 systemd[1]: Successfully loaded SELinux policy in 34.804ms.
Jul 9 09:55:57.880790 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.124ms.
Jul 9 09:55:57.880801 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 09:55:57.880812 systemd[1]: Detected virtualization kvm.
Jul 9 09:55:57.880822 systemd[1]: Detected architecture arm64.
Jul 9 09:55:57.880832 systemd[1]: Detected first boot.
Jul 9 09:55:57.880842 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 09:55:57.880852 zram_generator::config[1044]: No configuration found.
Jul 9 09:55:57.880864 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 09:55:57.880873 systemd[1]: Populated /etc with preset unit settings.
Jul 9 09:55:57.880884 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 09:55:57.880894 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 09:55:57.880904 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 09:55:57.880914 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 09:55:57.880924 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 09:55:57.880934 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 09:55:57.880946 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 09:55:57.880956 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 09:55:57.880966 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 09:55:57.880977 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 09:55:57.880987 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 09:55:57.880997 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 09:55:57.881007 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 09:55:57.881017 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 09:55:57.881028 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 09:55:57.881039 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 09:55:57.881050 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 09:55:57.881060 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 09:55:57.881070 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jul 9 09:55:57.881080 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 09:55:57.881090 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 09:55:57.881100 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 09:55:57.881112 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 09:55:57.881123 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 09:55:57.881133 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 09:55:57.881143 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 09:55:57.881155 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 09:55:57.881165 systemd[1]: Reached target swap.target - Swaps.
Jul 9 09:55:57.881175 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 09:55:57.881185 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 09:55:57.881196 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 09:55:57.881207 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 09:55:57.881217 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 09:55:57.881228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 09:55:57.881238 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 09:55:57.881248 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 09:55:57.881258 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 09:55:57.881268 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 09:55:57.881278 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 09:55:57.881288 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 09:55:57.881299 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 09:55:57.881311 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 09:55:57.881321 systemd[1]: Reached target machines.target - Containers.
Jul 9 09:55:57.881331 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 09:55:57.881341 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 09:55:57.881352 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 09:55:57.881362 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 09:55:57.881373 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 09:55:57.881385 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 09:55:57.881397 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 09:55:57.881407 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 09:55:57.881421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 09:55:57.881434 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 09:55:57.881444 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 09:55:57.881454 kernel: fuse: init (API version 7.39)
Jul 9 09:55:57.881464 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 09:55:57.881474 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 09:55:57.881485 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 09:55:57.881496 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 09:55:57.881510 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 09:55:57.881520 kernel: ACPI: bus type drm_connector registered
Jul 9 09:55:57.881529 kernel: loop: module loaded
Jul 9 09:55:57.881539 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 09:55:57.881568 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 09:55:57.881580 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 09:55:57.881590 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 09:55:57.881603 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 09:55:57.881613 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 09:55:57.881623 systemd[1]: Stopped verity-setup.service.
Jul 9 09:55:57.881634 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 09:55:57.881648 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 09:55:57.881666 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 09:55:57.881676 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 09:55:57.881686 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 09:55:57.881696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 09:55:57.881727 systemd-journald[1116]: Collecting audit messages is disabled.
Jul 9 09:55:57.881748 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 09:55:57.881759 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 09:55:57.881771 systemd-journald[1116]: Journal started
Jul 9 09:55:57.881807 systemd-journald[1116]: Runtime Journal (/run/log/journal/3e04392f27884d7ab0789e89a26120ed) is 5.9M, max 47.3M, 41.4M free.
Jul 9 09:55:57.676722 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 09:55:57.693491 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 9 09:55:57.693894 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 09:55:57.884610 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 09:55:57.885321 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 09:55:57.885500 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 09:55:57.886651 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 09:55:57.886825 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 09:55:57.887885 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 09:55:57.888034 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 09:55:57.889191 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 09:55:57.889361 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 09:55:57.890498 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 09:55:57.890709 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 09:55:57.891732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 09:55:57.891882 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 09:55:57.892957 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 09:55:57.894039 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 09:55:57.895483 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 09:55:57.896736 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 09:55:57.909128 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 09:55:57.922711 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 09:55:57.924535 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 09:55:57.925328 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 09:55:57.925364 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 09:55:57.927022 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 09:55:57.928873 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 09:55:57.932729 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 09:55:57.934421 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 09:55:57.935693 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 09:55:57.937884 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 09:55:57.938758 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 09:55:57.939699 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 09:55:57.940476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 09:55:57.942721 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 09:55:57.947666 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 09:55:57.948248 systemd-journald[1116]: Time spent on flushing to /var/log/journal/3e04392f27884d7ab0789e89a26120ed is 15.838ms for 868 entries.
Jul 9 09:55:57.948248 systemd-journald[1116]: System Journal (/var/log/journal/3e04392f27884d7ab0789e89a26120ed) is 8M, max 195.6M, 187.6M free.
Jul 9 09:55:57.973769 systemd-journald[1116]: Received client request to flush runtime journal.
Jul 9 09:55:57.973832 kernel: loop0: detected capacity change from 0 to 211168
Jul 9 09:55:57.950730 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 09:55:57.953805 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 09:55:57.955868 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 09:55:57.956842 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 09:55:57.958262 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 09:55:57.962167 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 09:55:57.968378 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 09:55:57.984131 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 09:55:57.986630 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 09:55:57.989662 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 9 09:55:57.992004 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 09:55:57.995135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 09:55:58.005308 udevadm[1176]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jul 9 09:55:58.011946 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 09:55:58.023908 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 09:55:58.025259 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 09:55:58.030643 kernel: loop1: detected capacity change from 0 to 123192
Jul 9 09:55:58.051884 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 9 09:55:58.051901 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 9 09:55:58.056704 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 09:55:58.062579 kernel: loop2: detected capacity change from 0 to 113512
Jul 9 09:55:58.108012 kernel: loop3: detected capacity change from 0 to 211168
Jul 9 09:55:58.115585 kernel: loop4: detected capacity change from 0 to 123192
Jul 9 09:55:58.121574 kernel: loop5: detected capacity change from 0 to 113512
Jul 9 09:55:58.126144 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 9 09:55:58.126557 (sd-merge)[1191]: Merged extensions into '/usr'.
Jul 9 09:55:58.130491 systemd[1]: Reload requested from client PID 1162 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 09:55:58.130505 systemd[1]: Reloading...
Jul 9 09:55:58.181666 zram_generator::config[1217]: No configuration found.
Jul 9 09:55:58.262752 ldconfig[1157]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 09:55:58.287971 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 09:55:58.337420 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 09:55:58.337826 systemd[1]: Reloading finished in 206 ms.
Jul 9 09:55:58.352371 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 09:55:58.355580 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 09:55:58.369857 systemd[1]: Starting ensure-sysext.service...
Jul 9 09:55:58.371579 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 09:55:58.387422 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 09:55:58.387643 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 09:55:58.388281 systemd-tmpfiles[1256]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 09:55:58.388485 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jul 9 09:55:58.388531 systemd-tmpfiles[1256]: ACLs are not supported, ignoring.
Jul 9 09:55:58.390576 systemd[1]: Reload requested from client PID 1255 ('systemctl') (unit ensure-sysext.service)...
Jul 9 09:55:58.390588 systemd[1]: Reloading...
Jul 9 09:55:58.391328 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 09:55:58.391333 systemd-tmpfiles[1256]: Skipping /boot
Jul 9 09:55:58.399787 systemd-tmpfiles[1256]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 09:55:58.399803 systemd-tmpfiles[1256]: Skipping /boot
Jul 9 09:55:58.434580 zram_generator::config[1288]: No configuration found.
Jul 9 09:55:58.513517 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 09:55:58.564022 systemd[1]: Reloading finished in 173 ms.
Jul 9 09:55:58.579582 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 09:55:58.595607 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 09:55:58.603089 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 09:55:58.605392 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 09:55:58.607732 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 09:55:58.611167 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 09:55:58.616087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 09:55:58.620312 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 09:55:58.624259 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 09:55:58.627153 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 09:55:58.632714 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 09:55:58.635923 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 09:55:58.639848 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 09:55:58.639957 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 09:55:58.640997 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 09:55:58.642692 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 09:55:58.642839 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 09:55:58.648162 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 09:55:58.648334 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 09:55:58.649912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 09:55:58.650061 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 09:55:58.656900 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 09:55:58.660479 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 09:55:58.661752 systemd-udevd[1326]: Using default interface naming scheme 'v255'.
Jul 9 09:55:58.668857 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 09:55:58.672490 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 09:55:58.675901 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 09:55:58.678213 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 09:55:58.679338 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 09:55:58.679575 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 09:55:58.682777 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 09:55:58.687630 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 09:55:58.689797 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 09:55:58.691346 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 09:55:58.692830 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 09:55:58.692985 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 09:55:58.694289 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 09:55:58.695608 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 09:55:58.698233 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 09:55:58.698390 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 09:55:58.707508 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 09:55:58.707735 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 09:55:58.709194 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 09:55:58.713077 systemd[1]: Finished ensure-sysext.service.
Jul 9 09:55:58.719355 augenrules[1384]: No rules
Jul 9 09:55:58.720177 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 09:55:58.720857 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 09:55:58.736762 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 09:55:58.737555 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 09:55:58.737624 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 09:55:58.742152 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 09:55:58.743074 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 09:55:58.746236 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 09:55:58.752445 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 9 09:55:58.761573 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1373)
Jul 9 09:55:58.820310 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 09:55:58.821394 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 09:55:58.826235 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 09:55:58.835813 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 09:55:58.858455 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 09:55:58.860914 systemd-networkd[1392]: lo: Link UP
Jul 9 09:55:58.860921 systemd-networkd[1392]: lo: Gained carrier
Jul 9 09:55:58.861867 systemd-networkd[1392]: Enumeration completed
Jul 9 09:55:58.862657 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 09:55:58.862662 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 09:55:58.862667 systemd-networkd[1392]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 09:55:58.863378 systemd-networkd[1392]: eth0: Link UP
Jul 9 09:55:58.863385 systemd-networkd[1392]: eth0: Gained carrier
Jul 9 09:55:58.863399 systemd-networkd[1392]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 09:55:58.866767 systemd-resolved[1325]: Positive Trust Anchors:
Jul 9 09:55:58.866783 systemd-resolved[1325]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 09:55:58.866817 systemd-resolved[1325]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 09:55:58.873099 systemd-resolved[1325]: Defaulting to hostname 'linux'.
Jul 9 09:55:58.873884 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 09:55:58.876794 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 09:55:58.877901 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 09:55:58.879285 systemd[1]: Reached target network.target - Network.
Jul 9 09:55:58.879625 systemd-networkd[1392]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 09:55:58.880007 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 09:55:58.882995 systemd-timesyncd[1394]: Network configuration changed, trying to establish connection.
Jul 9 09:55:58.884011 systemd-timesyncd[1394]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 9 09:55:58.884063 systemd-timesyncd[1394]: Initial clock synchronization to Wed 2025-07-09 09:55:59.008886 UTC.
Jul 9 09:55:58.903856 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 09:55:58.905755 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 09:55:58.928624 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 9 09:55:58.934725 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 9 09:55:58.946837 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 9 09:55:58.957128 lvm[1421]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 9 09:55:58.997178 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 9 09:55:58.998446 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 9 09:55:59.000675 systemd[1]: Reached target sysinit.target - System Initialization. Jul 9 09:55:59.001533 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 9 09:55:59.002506 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 9 09:55:59.003753 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 9 09:55:59.004691 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 9 09:55:59.005871 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 9 09:55:59.006897 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 9 09:55:59.006930 systemd[1]: Reached target paths.target - Path Units. Jul 9 09:55:59.007654 systemd[1]: Reached target timers.target - Timer Units. Jul 9 09:55:59.009445 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 9 09:55:59.011731 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 9 09:55:59.015275 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). 
Jul 9 09:55:59.016606 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 9 09:55:59.017597 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 9 09:55:59.021504 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 9 09:55:59.023124 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 9 09:55:59.025359 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 9 09:55:59.027010 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 9 09:55:59.027995 systemd[1]: Reached target sockets.target - Socket Units. Jul 9 09:55:59.028803 systemd[1]: Reached target basic.target - Basic System. Jul 9 09:55:59.029584 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 9 09:55:59.029622 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 9 09:55:59.030541 systemd[1]: Starting containerd.service - containerd container runtime... Jul 9 09:55:59.032421 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 9 09:55:59.032675 lvm[1428]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 9 09:55:59.036691 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 9 09:55:59.038547 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 9 09:55:59.039543 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 9 09:55:59.042809 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 9 09:55:59.044583 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Jul 9 09:55:59.047778 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 9 09:55:59.049870 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 9 09:55:59.066994 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 9 09:55:59.068678 jq[1431]: false Jul 9 09:55:59.068886 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 9 09:55:59.069389 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 9 09:55:59.070517 systemd[1]: Starting update-engine.service - Update Engine... Jul 9 09:55:59.075714 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 9 09:55:59.077855 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 9 09:55:59.082054 extend-filesystems[1432]: Found loop3 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found loop4 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found loop5 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda1 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda2 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda3 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found usr Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda4 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda6 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda7 Jul 9 09:55:59.082054 extend-filesystems[1432]: Found vda9 Jul 9 09:55:59.082054 extend-filesystems[1432]: Checking size of /dev/vda9 Jul 9 09:55:59.097532 dbus-daemon[1430]: [system] SELinux support is enabled Jul 9 09:55:59.088974 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jul 9 09:55:59.117408 jq[1446]: true Jul 9 09:55:59.089161 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 9 09:55:59.091391 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 9 09:55:59.119754 tar[1451]: linux-arm64/LICENSE Jul 9 09:55:59.119754 tar[1451]: linux-arm64/helm Jul 9 09:55:59.091625 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 9 09:55:59.098097 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 9 09:55:59.106523 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 9 09:55:59.106935 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 9 09:55:59.108769 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 9 09:55:59.108789 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 9 09:55:59.111149 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 9 09:55:59.112053 systemd[1]: motdgen.service: Deactivated successfully. Jul 9 09:55:59.112276 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jul 9 09:55:59.131423 extend-filesystems[1432]: Resized partition /dev/vda9 Jul 9 09:55:59.132777 update_engine[1444]: I20250709 09:55:59.131978 1444 main.cc:92] Flatcar Update Engine starting Jul 9 09:55:59.142128 extend-filesystems[1466]: resize2fs 1.47.1 (20-May-2024) Jul 9 09:55:59.144840 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 9 09:55:59.146018 systemd[1]: Started update-engine.service - Update Engine. Jul 9 09:55:59.158496 update_engine[1444]: I20250709 09:55:59.157528 1444 update_check_scheduler.cc:74] Next update check in 5m28s Jul 9 09:55:59.158533 jq[1457]: true Jul 9 09:55:59.169960 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1375) Jul 9 09:55:59.169930 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 9 09:55:59.177587 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 9 09:55:59.198967 systemd-logind[1443]: Watching system buttons on /dev/input/event0 (Power Button) Jul 9 09:55:59.200056 extend-filesystems[1466]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 9 09:55:59.200056 extend-filesystems[1466]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 9 09:55:59.200056 extend-filesystems[1466]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 9 09:55:59.209778 extend-filesystems[1432]: Resized filesystem in /dev/vda9 Jul 9 09:55:59.201417 systemd-logind[1443]: New seat seat0. Jul 9 09:55:59.202343 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 9 09:55:59.203682 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 9 09:55:59.210305 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 9 09:55:59.259906 locksmithd[1467]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 9 09:55:59.269960 bash[1491]: Updated "/home/core/.ssh/authorized_keys" Jul 9 09:55:59.271483 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 9 09:55:59.273385 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 9 09:55:59.380844 containerd[1455]: time="2025-07-09T09:55:59.379728392Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jul 9 09:55:59.411340 containerd[1455]: time="2025-07-09T09:55:59.411288830Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 9 09:55:59.412868 containerd[1455]: time="2025-07-09T09:55:59.412807155Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 9 09:55:59.412868 containerd[1455]: time="2025-07-09T09:55:59.412849685Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 9 09:55:59.412868 containerd[1455]: time="2025-07-09T09:55:59.412868712Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 9 09:55:59.413055 containerd[1455]: time="2025-07-09T09:55:59.413027906Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 9 09:55:59.413055 containerd[1455]: time="2025-07-09T09:55:59.413051489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413124 containerd[1455]: time="2025-07-09T09:55:59.413112239Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413144 containerd[1455]: time="2025-07-09T09:55:59.413125583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413509 containerd[1455]: time="2025-07-09T09:55:59.413450420Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413509 containerd[1455]: time="2025-07-09T09:55:59.413479969Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413509 containerd[1455]: time="2025-07-09T09:55:59.413499762Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413618 containerd[1455]: time="2025-07-09T09:55:59.413515081Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413618 containerd[1455]: time="2025-07-09T09:55:59.413610661Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413823 containerd[1455]: time="2025-07-09T09:55:59.413805975Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413964 containerd[1455]: time="2025-07-09T09:55:59.413946060Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 9 09:55:59.413997 containerd[1455]: time="2025-07-09T09:55:59.413963637Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 9 09:55:59.414064 containerd[1455]: time="2025-07-09T09:55:59.414046559Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 9 09:55:59.414107 containerd[1455]: time="2025-07-09T09:55:59.414095458Z" level=info msg="metadata content store policy set" policy=shared Jul 9 09:55:59.421244 containerd[1455]: time="2025-07-09T09:55:59.421199305Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 9 09:55:59.421334 containerd[1455]: time="2025-07-09T09:55:59.421265740Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 9 09:55:59.421334 containerd[1455]: time="2025-07-09T09:55:59.421283558Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 9 09:55:59.421334 containerd[1455]: time="2025-07-09T09:55:59.421301134Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 9 09:55:59.421334 containerd[1455]: time="2025-07-09T09:55:59.421316090Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 9 09:55:59.421509 containerd[1455]: time="2025-07-09T09:55:59.421490239Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 9 09:55:59.421762 containerd[1455]: time="2025-07-09T09:55:59.421744570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 9 09:55:59.421878 containerd[1455]: time="2025-07-09T09:55:59.421860750Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 9 09:55:59.421901 containerd[1455]: time="2025-07-09T09:55:59.421880665Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jul 9 09:55:59.421901 containerd[1455]: time="2025-07-09T09:55:59.421895258Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 9 09:55:59.421944 containerd[1455]: time="2025-07-09T09:55:59.421910415Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 9 09:55:59.421944 containerd[1455]: time="2025-07-09T09:55:59.421934643Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 9 09:55:59.421978 containerd[1455]: time="2025-07-09T09:55:59.421947261Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 9 09:55:59.421978 containerd[1455]: time="2025-07-09T09:55:59.421962579Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 9 09:55:59.422024 containerd[1455]: time="2025-07-09T09:55:59.421978624Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 9 09:55:59.422024 containerd[1455]: time="2025-07-09T09:55:59.421994346Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 9 09:55:59.422024 containerd[1455]: time="2025-07-09T09:55:59.422006923Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Jul 9 09:55:59.422024 containerd[1455]: time="2025-07-09T09:55:59.422018211Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 9 09:55:59.422085 containerd[1455]: time="2025-07-09T09:55:59.422039133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422085 containerd[1455]: time="2025-07-09T09:55:59.422053847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422085 containerd[1455]: time="2025-07-09T09:55:59.422067029Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422085 containerd[1455]: time="2025-07-09T09:55:59.422079445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422157 containerd[1455]: time="2025-07-09T09:55:59.422091257Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422157 containerd[1455]: time="2025-07-09T09:55:59.422103592Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422157 containerd[1455]: time="2025-07-09T09:55:59.422118669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422157 containerd[1455]: time="2025-07-09T09:55:59.422131851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422157 containerd[1455]: time="2025-07-09T09:55:59.422144549Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422243 containerd[1455]: time="2025-07-09T09:55:59.422161239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jul 9 09:55:59.422243 containerd[1455]: time="2025-07-09T09:55:59.422173453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422243 containerd[1455]: time="2025-07-09T09:55:59.422186071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422243 containerd[1455]: time="2025-07-09T09:55:59.422198125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422243 containerd[1455]: time="2025-07-09T09:55:59.422216991Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 9 09:55:59.422243 containerd[1455]: time="2025-07-09T09:55:59.422238477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422341 containerd[1455]: time="2025-07-09T09:55:59.422252143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422341 containerd[1455]: time="2025-07-09T09:55:59.422263471Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 9 09:55:59.422492 containerd[1455]: time="2025-07-09T09:55:59.422445159Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 9 09:55:59.422492 containerd[1455]: time="2025-07-09T09:55:59.422467855Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 9 09:55:59.422492 containerd[1455]: time="2025-07-09T09:55:59.422478094Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Jul 9 09:55:59.422590 containerd[1455]: time="2025-07-09T09:55:59.422500427Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 9 09:55:59.422590 containerd[1455]: time="2025-07-09T09:55:59.422510505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422590 containerd[1455]: time="2025-07-09T09:55:59.422522599Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 9 09:55:59.422590 containerd[1455]: time="2025-07-09T09:55:59.422532596Z" level=info msg="NRI interface is disabled by configuration." Jul 9 09:55:59.422590 containerd[1455]: time="2025-07-09T09:55:59.422542957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 9 09:55:59.422942 containerd[1455]: time="2025-07-09T09:55:59.422888031Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: 
SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 9 09:55:59.422942 containerd[1455]: time="2025-07-09T09:55:59.422946040Z" level=info msg="Connect containerd service" Jul 9 09:55:59.423076 containerd[1455]: time="2025-07-09T09:55:59.422983652Z" level=info msg="using legacy CRI server" Jul 9 09:55:59.423076 containerd[1455]: time="2025-07-09T09:55:59.422990787Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 09:55:59.423245 containerd[1455]: time="2025-07-09T09:55:59.423226292Z" 
level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 9 09:55:59.424153 containerd[1455]: time="2025-07-09T09:55:59.424126951Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 09:55:59.426428 containerd[1455]: time="2025-07-09T09:55:59.424441307Z" level=info msg="Start subscribing containerd event" Jul 9 09:55:59.426428 containerd[1455]: time="2025-07-09T09:55:59.424504879Z" level=info msg="Start recovering state" Jul 9 09:55:59.426428 containerd[1455]: time="2025-07-09T09:55:59.424592075Z" level=info msg="Start event monitor" Jul 9 09:55:59.426428 containerd[1455]: time="2025-07-09T09:55:59.424766870Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 9 09:55:59.426428 containerd[1455]: time="2025-07-09T09:55:59.424819638Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 09:55:59.427483 containerd[1455]: time="2025-07-09T09:55:59.427445022Z" level=info msg="Start snapshots syncer" Jul 9 09:55:59.427483 containerd[1455]: time="2025-07-09T09:55:59.427476708Z" level=info msg="Start cni network conf syncer for default" Jul 9 09:55:59.427483 containerd[1455]: time="2025-07-09T09:55:59.427485859Z" level=info msg="Start streaming server" Jul 9 09:55:59.428006 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 09:55:59.429605 containerd[1455]: time="2025-07-09T09:55:59.429573274Z" level=info msg="containerd successfully booted in 0.051147s" Jul 9 09:55:59.559172 tar[1451]: linux-arm64/README.md Jul 9 09:55:59.571954 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 9 09:55:59.671560 sshd_keygen[1441]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 9 09:55:59.690577 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 9 09:55:59.702861 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 9 09:55:59.708396 systemd[1]: issuegen.service: Deactivated successfully. Jul 9 09:55:59.708649 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 9 09:55:59.711399 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 9 09:55:59.723789 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 9 09:55:59.735942 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 9 09:55:59.738194 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 9 09:55:59.739305 systemd[1]: Reached target getty.target - Login Prompts. Jul 9 09:56:00.310093 systemd-networkd[1392]: eth0: Gained IPv6LL Jul 9 09:56:00.312780 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 09:56:00.314455 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 09:56:00.334863 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 09:56:00.337406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 09:56:00.339632 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 09:56:00.356475 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 09:56:00.356780 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 09:56:00.358406 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 9 09:56:00.365121 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 09:56:00.936580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 9 09:56:00.937915 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 9 09:56:00.939056 systemd[1]: Startup finished in 536ms (kernel) + 5.531s (initrd) + 3.746s (userspace) = 9.814s.
Jul 9 09:56:00.940306 (kubelet)[1544]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 09:56:01.378165 kubelet[1544]: E0709 09:56:01.378016 1544 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 09:56:01.380922 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 09:56:01.381130 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 09:56:01.381797 systemd[1]: kubelet.service: Consumed 821ms CPU time, 257.2M memory peak.
Jul 9 09:56:04.611986 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 9 09:56:04.613241 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:47348.service - OpenSSH per-connection server daemon (10.0.0.1:47348).
Jul 9 09:56:04.676237 sshd[1558]: Accepted publickey for core from 10.0.0.1 port 47348 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:56:04.677857 sshd-session[1558]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:56:04.692677 systemd-logind[1443]: New session 1 of user core.
Jul 9 09:56:04.693560 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 9 09:56:04.704170 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 9 09:56:04.713132 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 9 09:56:04.716393 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 9 09:56:04.722477 (systemd)[1562]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 9 09:56:04.724920 systemd-logind[1443]: New session c1 of user core.
Jul 9 09:56:04.838544 systemd[1562]: Queued start job for default target default.target.
Jul 9 09:56:04.848535 systemd[1562]: Created slice app.slice - User Application Slice.
Jul 9 09:56:04.848581 systemd[1562]: Reached target paths.target - Paths.
Jul 9 09:56:04.848623 systemd[1562]: Reached target timers.target - Timers.
Jul 9 09:56:04.849883 systemd[1562]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 9 09:56:04.858963 systemd[1562]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 9 09:56:04.859029 systemd[1562]: Reached target sockets.target - Sockets.
Jul 9 09:56:04.859065 systemd[1562]: Reached target basic.target - Basic System.
Jul 9 09:56:04.859095 systemd[1562]: Reached target default.target - Main User Target.
Jul 9 09:56:04.859121 systemd[1562]: Startup finished in 128ms.
Jul 9 09:56:04.859295 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 9 09:56:04.860696 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 9 09:56:04.926006 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:47354.service - OpenSSH per-connection server daemon (10.0.0.1:47354).
Jul 9 09:56:04.974456 sshd[1573]: Accepted publickey for core from 10.0.0.1 port 47354 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:56:04.975826 sshd-session[1573]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:56:04.980304 systemd-logind[1443]: New session 2 of user core.
Jul 9 09:56:04.995749 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 9 09:56:05.048054 sshd[1575]: Connection closed by 10.0.0.1 port 47354
Jul 9 09:56:05.048709 sshd-session[1573]: pam_unix(sshd:session): session closed for user core
Jul 9 09:56:05.058691 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:47354.service: Deactivated successfully.
Jul 9 09:56:05.060115 systemd[1]: session-2.scope: Deactivated successfully.
Jul 9 09:56:05.060777 systemd-logind[1443]: Session 2 logged out. Waiting for processes to exit.
Jul 9 09:56:05.073965 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:47356.service - OpenSSH per-connection server daemon (10.0.0.1:47356).
Jul 9 09:56:05.075019 systemd-logind[1443]: Removed session 2.
Jul 9 09:56:05.116725 sshd[1580]: Accepted publickey for core from 10.0.0.1 port 47356 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:56:05.118062 sshd-session[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:56:05.123195 systemd-logind[1443]: New session 3 of user core.
Jul 9 09:56:05.133735 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 9 09:56:05.182140 sshd[1583]: Connection closed by 10.0.0.1 port 47356
Jul 9 09:56:05.182569 sshd-session[1580]: pam_unix(sshd:session): session closed for user core
Jul 9 09:56:05.196738 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:47356.service: Deactivated successfully.
Jul 9 09:56:05.198192 systemd[1]: session-3.scope: Deactivated successfully.
Jul 9 09:56:05.199433 systemd-logind[1443]: Session 3 logged out. Waiting for processes to exit.
Jul 9 09:56:05.204800 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:47370.service - OpenSSH per-connection server daemon (10.0.0.1:47370).
Jul 9 09:56:05.205728 systemd-logind[1443]: Removed session 3.
Jul 9 09:56:05.246151 sshd[1588]: Accepted publickey for core from 10.0.0.1 port 47370 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:56:05.247465 sshd-session[1588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:56:05.252039 systemd-logind[1443]: New session 4 of user core.
Jul 9 09:56:05.263732 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 9 09:56:05.315360 sshd[1591]: Connection closed by 10.0.0.1 port 47370
Jul 9 09:56:05.315833 sshd-session[1588]: pam_unix(sshd:session): session closed for user core
Jul 9 09:56:05.325722 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:47370.service: Deactivated successfully.
Jul 9 09:56:05.327113 systemd[1]: session-4.scope: Deactivated successfully.
Jul 9 09:56:05.329703 systemd-logind[1443]: Session 4 logged out. Waiting for processes to exit.
Jul 9 09:56:05.331408 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:47378.service - OpenSSH per-connection server daemon (10.0.0.1:47378).
Jul 9 09:56:05.332191 systemd-logind[1443]: Removed session 4.
Jul 9 09:56:05.373929 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 47378 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:56:05.375213 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:56:05.379678 systemd-logind[1443]: New session 5 of user core.
Jul 9 09:56:05.390734 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 9 09:56:05.450781 sudo[1600]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 9 09:56:05.451088 sudo[1600]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 09:56:05.468699 sudo[1600]: pam_unix(sudo:session): session closed for user root
Jul 9 09:56:05.470617 sshd[1599]: Connection closed by 10.0.0.1 port 47378
Jul 9 09:56:05.471303 sshd-session[1596]: pam_unix(sshd:session): session closed for user core
Jul 9 09:56:05.487053 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:47378.service: Deactivated successfully.
Jul 9 09:56:05.489950 systemd[1]: session-5.scope: Deactivated successfully.
Jul 9 09:56:05.490720 systemd-logind[1443]: Session 5 logged out. Waiting for processes to exit.
Jul 9 09:56:05.505894 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:47380.service - OpenSSH per-connection server daemon (10.0.0.1:47380).
Jul 9 09:56:05.506830 systemd-logind[1443]: Removed session 5.
Jul 9 09:56:05.545962 sshd[1605]: Accepted publickey for core from 10.0.0.1 port 47380 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:56:05.547153 sshd-session[1605]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:56:05.551616 systemd-logind[1443]: New session 6 of user core.
Jul 9 09:56:05.565735 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 9 09:56:05.617531 sudo[1610]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 9 09:56:05.617828 sudo[1610]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 09:56:05.620800 sudo[1610]: pam_unix(sudo:session): session closed for user root
Jul 9 09:56:05.625776 sudo[1609]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jul 9 09:56:05.626071 sudo[1609]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 09:56:05.642867 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 09:56:05.667235 augenrules[1632]: No rules
Jul 9 09:56:05.668658 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 09:56:05.669629 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 09:56:05.670928 sudo[1609]: pam_unix(sudo:session): session closed for user root
Jul 9 09:56:05.672110 sshd[1608]: Connection closed by 10.0.0.1 port 47380
Jul 9 09:56:05.672464 sshd-session[1605]: pam_unix(sshd:session): session closed for user core
Jul 9 09:56:05.688771 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:47380.service: Deactivated successfully.
Jul 9 09:56:05.690144 systemd[1]: session-6.scope: Deactivated successfully.
Jul 9 09:56:05.690841 systemd-logind[1443]: Session 6 logged out. Waiting for processes to exit.
Jul 9 09:56:05.709863 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:47394.service - OpenSSH per-connection server daemon (10.0.0.1:47394).
Jul 9 09:56:05.711828 systemd-logind[1443]: Removed session 6.
Jul 9 09:56:05.755567 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 47394 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:56:05.756854 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:56:05.761625 systemd-logind[1443]: New session 7 of user core.
Jul 9 09:56:05.772783 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 9 09:56:05.824723 sudo[1644]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 9 09:56:05.825011 sudo[1644]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 9 09:56:06.162829 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jul 9 09:56:06.162903 (dockerd)[1664]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jul 9 09:56:06.417854 dockerd[1664]: time="2025-07-09T09:56:06.417722918Z" level=info msg="Starting up"
Jul 9 09:56:06.576292 dockerd[1664]: time="2025-07-09T09:56:06.576193448Z" level=info msg="Loading containers: start."
Jul 9 09:56:06.716709 kernel: Initializing XFRM netlink socket
Jul 9 09:56:06.786043 systemd-networkd[1392]: docker0: Link UP
Jul 9 09:56:06.816867 dockerd[1664]: time="2025-07-09T09:56:06.816759953Z" level=info msg="Loading containers: done."
Jul 9 09:56:06.828253 dockerd[1664]: time="2025-07-09T09:56:06.828198529Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 9 09:56:06.828384 dockerd[1664]: time="2025-07-09T09:56:06.828283630Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1
Jul 9 09:56:06.828479 dockerd[1664]: time="2025-07-09T09:56:06.828448244Z" level=info msg="Daemon has completed initialization"
Jul 9 09:56:06.856258 dockerd[1664]: time="2025-07-09T09:56:06.856182942Z" level=info msg="API listen on /run/docker.sock"
Jul 9 09:56:06.856357 systemd[1]: Started docker.service - Docker Application Container Engine.
Jul 9 09:56:07.308001 containerd[1455]: time="2025-07-09T09:56:07.307960044Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\""
Jul 9 09:56:07.877180 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2785697763.mount: Deactivated successfully.
Jul 9 09:56:08.890978 containerd[1455]: time="2025-07-09T09:56:08.890931787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:08.891803 containerd[1455]: time="2025-07-09T09:56:08.891328994Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718"
Jul 9 09:56:08.894206 containerd[1455]: time="2025-07-09T09:56:08.892668863Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:08.896462 containerd[1455]: time="2025-07-09T09:56:08.895894212Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:08.897243 containerd[1455]: time="2025-07-09T09:56:08.897212346Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.589210108s"
Jul 9 09:56:08.897243 containerd[1455]: time="2025-07-09T09:56:08.897247659Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\""
Jul 9 09:56:08.900368 containerd[1455]: time="2025-07-09T09:56:08.900335812Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\""
Jul 9 09:56:10.115768 containerd[1455]: time="2025-07-09T09:56:10.115705546Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:10.116263 containerd[1455]: time="2025-07-09T09:56:10.116218427Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625"
Jul 9 09:56:10.117084 containerd[1455]: time="2025-07-09T09:56:10.117059484Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:10.119846 containerd[1455]: time="2025-07-09T09:56:10.119817193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:10.122056 containerd[1455]: time="2025-07-09T09:56:10.122019175Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.221648898s"
Jul 9 09:56:10.122056 containerd[1455]: time="2025-07-09T09:56:10.122050656Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\""
Jul 9 09:56:10.122569 containerd[1455]: time="2025-07-09T09:56:10.122509732Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\""
Jul 9 09:56:11.221494 containerd[1455]: time="2025-07-09T09:56:11.221432304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:11.222967 containerd[1455]: time="2025-07-09T09:56:11.222728674Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517"
Jul 9 09:56:11.223626 containerd[1455]: time="2025-07-09T09:56:11.223595503Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:11.226567 containerd[1455]: time="2025-07-09T09:56:11.226519431Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:11.227684 containerd[1455]: time="2025-07-09T09:56:11.227609460Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.10503147s"
Jul 9 09:56:11.227684 containerd[1455]: time="2025-07-09T09:56:11.227640853Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\""
Jul 9 09:56:11.228118 containerd[1455]: time="2025-07-09T09:56:11.228094159Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\""
Jul 9 09:56:11.631467 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 9 09:56:11.642744 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:56:11.743516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:56:11.748022 (kubelet)[1933]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 9 09:56:11.792569 kubelet[1933]: E0709 09:56:11.792456 1933 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 9 09:56:11.796014 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 9 09:56:11.796154 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 9 09:56:11.796532 systemd[1]: kubelet.service: Consumed 142ms CPU time, 109.8M memory peak.
Jul 9 09:56:12.294962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1280065990.mount: Deactivated successfully.
Jul 9 09:56:12.721001 containerd[1455]: time="2025-07-09T09:56:12.720864494Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:12.721466 containerd[1455]: time="2025-07-09T09:56:12.721420524Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474"
Jul 9 09:56:12.722271 containerd[1455]: time="2025-07-09T09:56:12.722230890Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:12.724175 containerd[1455]: time="2025-07-09T09:56:12.724129957Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:12.725029 containerd[1455]: time="2025-07-09T09:56:12.724976926Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.496850774s"
Jul 9 09:56:12.725029 containerd[1455]: time="2025-07-09T09:56:12.725012887Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\""
Jul 9 09:56:12.725447 containerd[1455]: time="2025-07-09T09:56:12.725415160Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Jul 9 09:56:13.304865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3011755093.mount: Deactivated successfully.
Jul 9 09:56:14.065023 containerd[1455]: time="2025-07-09T09:56:14.064970800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:14.066065 containerd[1455]: time="2025-07-09T09:56:14.065745210Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Jul 9 09:56:14.066864 containerd[1455]: time="2025-07-09T09:56:14.066821914Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:14.069984 containerd[1455]: time="2025-07-09T09:56:14.069932353Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:14.071714 containerd[1455]: time="2025-07-09T09:56:14.071616012Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.346170234s"
Jul 9 09:56:14.071714 containerd[1455]: time="2025-07-09T09:56:14.071653804Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Jul 9 09:56:14.072309 containerd[1455]: time="2025-07-09T09:56:14.072274921Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Jul 9 09:56:14.533192 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount16085544.mount: Deactivated successfully.
Jul 9 09:56:14.536760 containerd[1455]: time="2025-07-09T09:56:14.536713057Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:14.537365 containerd[1455]: time="2025-07-09T09:56:14.537312229Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Jul 9 09:56:14.538728 containerd[1455]: time="2025-07-09T09:56:14.538691428Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:14.540896 containerd[1455]: time="2025-07-09T09:56:14.540851897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:14.542104 containerd[1455]: time="2025-07-09T09:56:14.541862125Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 469.543796ms"
Jul 9 09:56:14.542104 containerd[1455]: time="2025-07-09T09:56:14.541893538Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Jul 9 09:56:14.542337 containerd[1455]: time="2025-07-09T09:56:14.542312416Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Jul 9 09:56:15.032592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount508968476.mount: Deactivated successfully.
Jul 9 09:56:16.690806 containerd[1455]: time="2025-07-09T09:56:16.690750257Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:16.691710 containerd[1455]: time="2025-07-09T09:56:16.691450838Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601"
Jul 9 09:56:16.692671 containerd[1455]: time="2025-07-09T09:56:16.692641855Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:16.695872 containerd[1455]: time="2025-07-09T09:56:16.695837645Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 9 09:56:16.697345 containerd[1455]: time="2025-07-09T09:56:16.697305983Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.154965125s"
Jul 9 09:56:16.697563 containerd[1455]: time="2025-07-09T09:56:16.697446388Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Jul 9 09:56:21.902842 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 9 09:56:21.915227 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:56:21.931009 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 9 09:56:21.931084 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 9 09:56:21.931685 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:56:21.952976 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:56:21.997437 systemd[1]: Reload requested from client PID 2096 ('systemctl') (unit session-7.scope)...
Jul 9 09:56:21.997453 systemd[1]: Reloading...
Jul 9 09:56:22.071300 zram_generator::config[2140]: No configuration found.
Jul 9 09:56:22.309813 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 09:56:22.384140 systemd[1]: Reloading finished in 386 ms.
Jul 9 09:56:22.421738 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:56:22.424790 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:56:22.425485 systemd[1]: kubelet.service: Deactivated successfully.
Jul 9 09:56:22.425735 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:56:22.425779 systemd[1]: kubelet.service: Consumed 82ms CPU time, 94.9M memory peak.
Jul 9 09:56:22.427302 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:56:22.540326 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:56:22.544431 (kubelet)[2187]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 09:56:22.583782 kubelet[2187]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 09:56:22.583782 kubelet[2187]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 09:56:22.583782 kubelet[2187]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 09:56:22.584098 kubelet[2187]: I0709 09:56:22.583788 2187 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 09:56:22.844930 kubelet[2187]: I0709 09:56:22.844813 2187 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 09:56:22.844930 kubelet[2187]: I0709 09:56:22.844844 2187 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 09:56:22.845307 kubelet[2187]: I0709 09:56:22.845056 2187 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 09:56:22.878328 kubelet[2187]: E0709 09:56:22.878291 2187 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jul 9 09:56:22.882016 kubelet[2187]: I0709 09:56:22.881646 2187 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 09:56:22.893585 kubelet[2187]: E0709 09:56:22.893333 2187 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 9 09:56:22.893585 kubelet[2187]: I0709 09:56:22.893385 2187 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 9 09:56:22.896561 kubelet[2187]: I0709 09:56:22.896531 2187 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 09:56:22.897626 kubelet[2187]: I0709 09:56:22.897580 2187 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 09:56:22.897790 kubelet[2187]: I0709 09:56:22.897625 2187 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 09:56:22.897879 kubelet[2187]: I0709 09:56:22.897851 2187 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 09:56:22.897879 kubelet[2187]: I0709 09:56:22.897863 2187 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 09:56:22.898090 kubelet[2187]: I0709 09:56:22.898067 2187 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 09:56:22.900703 kubelet[2187]: I0709 09:56:22.900675 2187 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 09:56:22.900703 kubelet[2187]: I0709 09:56:22.900704 2187 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 09:56:22.900788 kubelet[2187]: I0709 09:56:22.900729 2187 kubelet.go:386] "Adding apiserver pod source"
Jul 9 09:56:22.902157 kubelet[2187]: I0709 09:56:22.901767 2187 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 09:56:22.904226 kubelet[2187]: E0709 09:56:22.904166 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jul 9 09:56:22.904907 kubelet[2187]: I0709 09:56:22.904882 2187 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 9 09:56:22.905540 kubelet[2187]: E0709 09:56:22.905505 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jul 9 09:56:22.905726 kubelet[2187]: I0709 09:56:22.905698 2187 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 09:56:22.905831 kubelet[2187]: W0709 09:56:22.905821 2187 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 9 09:56:22.908351 kubelet[2187]: I0709 09:56:22.908334 2187 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 09:56:22.908418 kubelet[2187]: I0709 09:56:22.908375 2187 server.go:1289] "Started kubelet"
Jul 9 09:56:22.908895 kubelet[2187]: I0709 09:56:22.908502 2187 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 09:56:22.908895 kubelet[2187]: I0709 09:56:22.908831 2187 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 09:56:22.909005 kubelet[2187]: I0709 09:56:22.908881 2187 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 09:56:22.909739 kubelet[2187]: I0709 09:56:22.909601 2187 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 09:56:22.910290 kubelet[2187]: I0709 09:56:22.910277 2187 server.go:317] "Adding debug handlers to kubelet server"
Jul 9 09:56:22.911867 kubelet[2187]: E0709 09:56:22.911836 2187 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 09:56:22.911940 kubelet[2187]: I0709 09:56:22.911876 2187 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 09:56:22.912061 kubelet[2187]: I0709 09:56:22.912037 2187 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 09:56:22.912265 kubelet[2187]: I0709 09:56:22.912110 2187 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 09:56:22.913568 kubelet[2187]: I0709 09:56:22.912399 2187 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 09:56:22.913568 kubelet[2187]: E0709 09:56:22.912462 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jul 9 09:56:22.913568 kubelet[2187]: E0709 09:56:22.913217 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms"
Jul 9 09:56:22.914203 kubelet[2187]: E0709 09:56:22.914180 2187 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 09:56:22.917518 kubelet[2187]: I0709 09:56:22.917430 2187 factory.go:223] Registration of the systemd container factory successfully
Jul 9 09:56:22.917823 kubelet[2187]: I0709 09:56:22.917588 2187 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 09:56:22.918565 kubelet[2187]: I0709 09:56:22.918447 2187 factory.go:223] Registration of the containerd container factory successfully
Jul 9 09:56:22.919676 kubelet[2187]: E0709 09:56:22.913297 2187 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18508cb963348731 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 09:56:22.908348209 +0000 UTC m=+0.360466515,LastTimestamp:2025-07-09 09:56:22.908348209 +0000 UTC m=+0.360466515,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Jul 9 09:56:22.929049 kubelet[2187]: I0709 09:56:22.929024 2187 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 09:56:22.929184 kubelet[2187]: I0709 09:56:22.929172 2187 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 09:56:22.929251 kubelet[2187]: I0709 09:56:22.929240 2187 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 09:56:22.931067 kubelet[2187]: I0709 09:56:22.930974 2187 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 9 09:56:22.932294 kubelet[2187]: I0709 09:56:22.932252 2187 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 9 09:56:22.932294 kubelet[2187]: I0709 09:56:22.932284 2187 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 9 09:56:22.932405 kubelet[2187]: I0709 09:56:22.932303 2187 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 09:56:22.932405 kubelet[2187]: I0709 09:56:22.932312 2187 kubelet.go:2436] "Starting kubelet main sync loop" Jul 9 09:56:22.932405 kubelet[2187]: E0709 09:56:22.932365 2187 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 09:56:23.005669 kubelet[2187]: I0709 09:56:23.005628 2187 policy_none.go:49] "None policy: Start" Jul 9 09:56:23.005669 kubelet[2187]: I0709 09:56:23.005658 2187 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 09:56:23.005669 kubelet[2187]: I0709 09:56:23.005672 2187 state_mem.go:35] "Initializing new in-memory state store" Jul 9 09:56:23.006448 kubelet[2187]: E0709 09:56:23.006207 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 9 09:56:23.011145 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 09:56:23.012506 kubelet[2187]: E0709 09:56:23.012462 2187 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 09:56:23.023768 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 09:56:23.026402 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 9 09:56:23.033245 kubelet[2187]: E0709 09:56:23.033204 2187 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 09:56:23.042380 kubelet[2187]: E0709 09:56:23.042343 2187 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 9 09:56:23.042595 kubelet[2187]: I0709 09:56:23.042574 2187 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 09:56:23.042636 kubelet[2187]: I0709 09:56:23.042592 2187 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 09:56:23.043125 kubelet[2187]: I0709 09:56:23.042835 2187 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 09:56:23.043711 kubelet[2187]: E0709 09:56:23.043687 2187 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 09:56:23.044503 kubelet[2187]: E0709 09:56:23.044474 2187 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 09:56:23.114075 kubelet[2187]: E0709 09:56:23.113959 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Jul 9 09:56:23.144461 kubelet[2187]: I0709 09:56:23.144058 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 09:56:23.144461 kubelet[2187]: E0709 09:56:23.144426 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 09:56:23.248057 systemd[1]: Created slice 
kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 9 09:56:23.259390 kubelet[2187]: E0709 09:56:23.259344 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 09:56:23.262175 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. Jul 9 09:56:23.264157 kubelet[2187]: E0709 09:56:23.263978 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 09:56:23.265983 systemd[1]: Created slice kubepods-burstable-pod9291031337121f74cb0bf13dc25e3597.slice - libcontainer container kubepods-burstable-pod9291031337121f74cb0bf13dc25e3597.slice. Jul 9 09:56:23.267514 kubelet[2187]: E0709 09:56:23.267485 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 09:56:23.314948 kubelet[2187]: I0709 09:56:23.314907 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 09:56:23.314948 kubelet[2187]: I0709 09:56:23.314946 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 09:56:23.315088 
kubelet[2187]: I0709 09:56:23.314968 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 9 09:56:23.315088 kubelet[2187]: I0709 09:56:23.314982 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9291031337121f74cb0bf13dc25e3597-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9291031337121f74cb0bf13dc25e3597\") " pod="kube-system/kube-apiserver-localhost" Jul 9 09:56:23.315088 kubelet[2187]: I0709 09:56:23.315006 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9291031337121f74cb0bf13dc25e3597-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9291031337121f74cb0bf13dc25e3597\") " pod="kube-system/kube-apiserver-localhost" Jul 9 09:56:23.315088 kubelet[2187]: I0709 09:56:23.315022 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 09:56:23.315088 kubelet[2187]: I0709 09:56:23.315036 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 09:56:23.315201 kubelet[2187]: 
I0709 09:56:23.315053 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 09:56:23.315201 kubelet[2187]: I0709 09:56:23.315069 2187 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9291031337121f74cb0bf13dc25e3597-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9291031337121f74cb0bf13dc25e3597\") " pod="kube-system/kube-apiserver-localhost" Jul 9 09:56:23.346458 kubelet[2187]: I0709 09:56:23.346430 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 09:56:23.346820 kubelet[2187]: E0709 09:56:23.346785 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 09:56:23.514751 kubelet[2187]: E0709 09:56:23.514635 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Jul 9 09:56:23.560061 kubelet[2187]: E0709 09:56:23.560027 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:23.560887 containerd[1455]: time="2025-07-09T09:56:23.560847109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 9 09:56:23.564487 kubelet[2187]: 
E0709 09:56:23.564397 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:23.565044 containerd[1455]: time="2025-07-09T09:56:23.564791323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 9 09:56:23.568479 kubelet[2187]: E0709 09:56:23.568456 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:23.570431 containerd[1455]: time="2025-07-09T09:56:23.570304856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9291031337121f74cb0bf13dc25e3597,Namespace:kube-system,Attempt:0,}" Jul 9 09:56:23.748508 kubelet[2187]: I0709 09:56:23.748472 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 09:56:23.748859 kubelet[2187]: E0709 09:56:23.748784 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 09:56:24.302983 kubelet[2187]: E0709 09:56:24.302933 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 9 09:56:24.309883 kubelet[2187]: E0709 09:56:24.309846 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" 
logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 9 09:56:24.311820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3065537132.mount: Deactivated successfully. Jul 9 09:56:24.315800 kubelet[2187]: E0709 09:56:24.315761 2187 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Jul 9 09:56:24.316365 containerd[1455]: time="2025-07-09T09:56:24.316304840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:56:24.318251 containerd[1455]: time="2025-07-09T09:56:24.318201707Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" Jul 9 09:56:24.319024 containerd[1455]: time="2025-07-09T09:56:24.318983521Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:56:24.320349 containerd[1455]: time="2025-07-09T09:56:24.320315870Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:56:24.321494 containerd[1455]: time="2025-07-09T09:56:24.321459326Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:56:24.322305 containerd[1455]: time="2025-07-09T09:56:24.322268423Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes 
read=0" Jul 9 09:56:24.323032 containerd[1455]: time="2025-07-09T09:56:24.322999559Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 09:56:24.324680 containerd[1455]: time="2025-07-09T09:56:24.324646236Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 09:56:24.327397 containerd[1455]: time="2025-07-09T09:56:24.327346550Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 762.476177ms" Jul 9 09:56:24.328820 containerd[1455]: time="2025-07-09T09:56:24.328791755Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 767.847366ms" Jul 9 09:56:24.331468 containerd[1455]: time="2025-07-09T09:56:24.331343478Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 760.961175ms" Jul 9 09:56:24.386634 kubelet[2187]: E0709 09:56:24.386597 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial 
tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 9 09:56:24.468843 kubelet[2187]: E0709 09:56:24.468410 2187 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 9 09:56:24.471694 containerd[1455]: time="2025-07-09T09:56:24.471513789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 09:56:24.471694 containerd[1455]: time="2025-07-09T09:56:24.471614065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 09:56:24.471694 containerd[1455]: time="2025-07-09T09:56:24.471630971Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:24.472647 containerd[1455]: time="2025-07-09T09:56:24.472501042Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 09:56:24.472647 containerd[1455]: time="2025-07-09T09:56:24.472574757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 09:56:24.472738 containerd[1455]: time="2025-07-09T09:56:24.472655683Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:24.473826 containerd[1455]: time="2025-07-09T09:56:24.472590581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:24.473826 containerd[1455]: time="2025-07-09T09:56:24.473263747Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:24.474422 containerd[1455]: time="2025-07-09T09:56:24.474286616Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 09:56:24.474422 containerd[1455]: time="2025-07-09T09:56:24.474349994Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 09:56:24.474422 containerd[1455]: time="2025-07-09T09:56:24.474365178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:24.474935 containerd[1455]: time="2025-07-09T09:56:24.474755464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:24.498811 systemd[1]: Started cri-containerd-517438f1e4a37453b079f2a0b3a3be0dec2c81c764b27d646ebfe9e9f4e65ccd.scope - libcontainer container 517438f1e4a37453b079f2a0b3a3be0dec2c81c764b27d646ebfe9e9f4e65ccd. Jul 9 09:56:24.500294 systemd[1]: Started cri-containerd-85c97cf26919dff47c25766b849a598c11d5b793c00c9c69ac405b39e4bf3137.scope - libcontainer container 85c97cf26919dff47c25766b849a598c11d5b793c00c9c69ac405b39e4bf3137. Jul 9 09:56:24.501486 systemd[1]: Started cri-containerd-d00be04b0b2b8b12926da8e5dde68237a429b12f927ec0bec921094c5b0df20a.scope - libcontainer container d00be04b0b2b8b12926da8e5dde68237a429b12f927ec0bec921094c5b0df20a. 
Jul 9 09:56:24.532517 containerd[1455]: time="2025-07-09T09:56:24.532037914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"517438f1e4a37453b079f2a0b3a3be0dec2c81c764b27d646ebfe9e9f4e65ccd\"" Jul 9 09:56:24.533296 kubelet[2187]: E0709 09:56:24.533268 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:24.535221 containerd[1455]: time="2025-07-09T09:56:24.535178953Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:9291031337121f74cb0bf13dc25e3597,Namespace:kube-system,Attempt:0,} returns sandbox id \"d00be04b0b2b8b12926da8e5dde68237a429b12f927ec0bec921094c5b0df20a\"" Jul 9 09:56:24.536151 kubelet[2187]: E0709 09:56:24.536125 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:24.542012 containerd[1455]: time="2025-07-09T09:56:24.541973586Z" level=info msg="CreateContainer within sandbox \"517438f1e4a37453b079f2a0b3a3be0dec2c81c764b27d646ebfe9e9f4e65ccd\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 09:56:24.542536 containerd[1455]: time="2025-07-09T09:56:24.542515428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"85c97cf26919dff47c25766b849a598c11d5b793c00c9c69ac405b39e4bf3137\"" Jul 9 09:56:24.542638 containerd[1455]: time="2025-07-09T09:56:24.542607170Z" level=info msg="CreateContainer within sandbox \"d00be04b0b2b8b12926da8e5dde68237a429b12f927ec0bec921094c5b0df20a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 09:56:24.543134 
kubelet[2187]: E0709 09:56:24.543107 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:24.546169 containerd[1455]: time="2025-07-09T09:56:24.546118544Z" level=info msg="CreateContainer within sandbox \"85c97cf26919dff47c25766b849a598c11d5b793c00c9c69ac405b39e4bf3137\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 09:56:24.550805 kubelet[2187]: I0709 09:56:24.550752 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 09:56:24.551574 kubelet[2187]: E0709 09:56:24.551173 2187 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 09:56:24.562472 containerd[1455]: time="2025-07-09T09:56:24.562362814Z" level=info msg="CreateContainer within sandbox \"517438f1e4a37453b079f2a0b3a3be0dec2c81c764b27d646ebfe9e9f4e65ccd\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"7a47250d3da141e3a251eec1869dca4fc47182517a5502ab23a749d3439fc721\"" Jul 9 09:56:24.564166 containerd[1455]: time="2025-07-09T09:56:24.564107044Z" level=info msg="StartContainer for \"7a47250d3da141e3a251eec1869dca4fc47182517a5502ab23a749d3439fc721\"" Jul 9 09:56:24.564652 containerd[1455]: time="2025-07-09T09:56:24.564538313Z" level=info msg="CreateContainer within sandbox \"d00be04b0b2b8b12926da8e5dde68237a429b12f927ec0bec921094c5b0df20a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b6d0652a7e30e22d96cffc434877800dbafec8d10015aaf3e050913d2e5c5fc4\"" Jul 9 09:56:24.565052 containerd[1455]: time="2025-07-09T09:56:24.564971106Z" level=info msg="CreateContainer within sandbox \"85c97cf26919dff47c25766b849a598c11d5b793c00c9c69ac405b39e4bf3137\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} 
returns container id \"658771e9a778a35c257cd93cbca336fb2c746f32397a4e8fb92df88df6806fd3\"" Jul 9 09:56:24.565052 containerd[1455]: time="2025-07-09T09:56:24.564985208Z" level=info msg="StartContainer for \"b6d0652a7e30e22d96cffc434877800dbafec8d10015aaf3e050913d2e5c5fc4\"" Jul 9 09:56:24.565323 containerd[1455]: time="2025-07-09T09:56:24.565301098Z" level=info msg="StartContainer for \"658771e9a778a35c257cd93cbca336fb2c746f32397a4e8fb92df88df6806fd3\"" Jul 9 09:56:24.592711 systemd[1]: Started cri-containerd-7a47250d3da141e3a251eec1869dca4fc47182517a5502ab23a749d3439fc721.scope - libcontainer container 7a47250d3da141e3a251eec1869dca4fc47182517a5502ab23a749d3439fc721. Jul 9 09:56:24.596108 systemd[1]: Started cri-containerd-658771e9a778a35c257cd93cbca336fb2c746f32397a4e8fb92df88df6806fd3.scope - libcontainer container 658771e9a778a35c257cd93cbca336fb2c746f32397a4e8fb92df88df6806fd3. Jul 9 09:56:24.597432 systemd[1]: Started cri-containerd-b6d0652a7e30e22d96cffc434877800dbafec8d10015aaf3e050913d2e5c5fc4.scope - libcontainer container b6d0652a7e30e22d96cffc434877800dbafec8d10015aaf3e050913d2e5c5fc4. 
Jul 9 09:56:24.637670 containerd[1455]: time="2025-07-09T09:56:24.636583533Z" level=info msg="StartContainer for \"b6d0652a7e30e22d96cffc434877800dbafec8d10015aaf3e050913d2e5c5fc4\" returns successfully" Jul 9 09:56:24.649138 containerd[1455]: time="2025-07-09T09:56:24.648090245Z" level=info msg="StartContainer for \"7a47250d3da141e3a251eec1869dca4fc47182517a5502ab23a749d3439fc721\" returns successfully" Jul 9 09:56:24.649138 containerd[1455]: time="2025-07-09T09:56:24.648163399Z" level=info msg="StartContainer for \"658771e9a778a35c257cd93cbca336fb2c746f32397a4e8fb92df88df6806fd3\" returns successfully" Jul 9 09:56:24.950897 kubelet[2187]: E0709 09:56:24.950720 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 09:56:24.952115 kubelet[2187]: E0709 09:56:24.951577 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 09:56:24.952115 kubelet[2187]: E0709 09:56:24.951697 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:24.952115 kubelet[2187]: E0709 09:56:24.951984 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:24.952365 kubelet[2187]: E0709 09:56:24.952349 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 09:56:24.952586 kubelet[2187]: E0709 09:56:24.952496 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:25.951281 kubelet[2187]: 
E0709 09:56:25.951116 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 09:56:25.951715 kubelet[2187]: E0709 09:56:25.951611 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:25.952621 kubelet[2187]: E0709 09:56:25.952430 2187 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Jul 9 09:56:25.952621 kubelet[2187]: E0709 09:56:25.952559 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:26.154772 kubelet[2187]: I0709 09:56:26.152536 2187 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 09:56:26.378865 kubelet[2187]: E0709 09:56:26.378595 2187 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Jul 9 09:56:26.453564 kubelet[2187]: I0709 09:56:26.453512 2187 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 9 09:56:26.453671 kubelet[2187]: E0709 09:56:26.453601 2187 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Jul 9 09:56:26.464050 kubelet[2187]: E0709 09:56:26.463989 2187 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 09:56:26.564853 kubelet[2187]: E0709 09:56:26.564809 2187 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 09:56:26.665464 kubelet[2187]: E0709 09:56:26.665356 2187 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 09:56:26.713086 kubelet[2187]: I0709 09:56:26.713046 2187 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:26.718058 kubelet[2187]: E0709 09:56:26.717977 2187 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:26.718058 kubelet[2187]: I0709 09:56:26.718007 2187 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:26.719706 kubelet[2187]: E0709 09:56:26.719674 2187 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:26.719706 kubelet[2187]: I0709 09:56:26.719698 2187 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:26.723796 kubelet[2187]: E0709 09:56:26.723758 2187 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:26.906152 kubelet[2187]: I0709 09:56:26.906103 2187 apiserver.go:52] "Watching apiserver"
Jul 9 09:56:26.913095 kubelet[2187]: I0709 09:56:26.913061 2187 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 9 09:56:26.951708 kubelet[2187]: I0709 09:56:26.951677 2187 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:26.952143 kubelet[2187]: I0709 09:56:26.951741 2187 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:26.953681 kubelet[2187]: E0709 09:56:26.953646 2187 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:26.953840 kubelet[2187]: E0709 09:56:26.953817 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:26.954197 kubelet[2187]: E0709 09:56:26.954169 2187 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:26.954310 kubelet[2187]: E0709 09:56:26.954296 2187 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:28.510436 systemd[1]: Reload requested from client PID 2479 ('systemctl') (unit session-7.scope)...
Jul 9 09:56:28.510453 systemd[1]: Reloading...
Jul 9 09:56:28.583587 zram_generator::config[2526]: No configuration found.
Jul 9 09:56:28.787595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 09:56:28.875772 systemd[1]: Reloading finished in 365 ms.
Jul 9 09:56:28.895476 kubelet[2187]: I0709 09:56:28.895401 2187 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 09:56:28.895569 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:56:28.908494 systemd[1]: kubelet.service: Deactivated successfully.
Jul 9 09:56:28.908793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:56:28.908863 systemd[1]: kubelet.service: Consumed 736ms CPU time, 133.5M memory peak.
Jul 9 09:56:28.917845 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 9 09:56:29.020399 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 9 09:56:29.025295 (kubelet)[2565]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 9 09:56:29.063404 kubelet[2565]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 09:56:29.063404 kubelet[2565]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jul 9 09:56:29.063404 kubelet[2565]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 9 09:56:29.063741 kubelet[2565]: I0709 09:56:29.063387 2565 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 9 09:56:29.068732 kubelet[2565]: I0709 09:56:29.068695 2565 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Jul 9 09:56:29.068732 kubelet[2565]: I0709 09:56:29.068723 2565 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 9 09:56:29.068959 kubelet[2565]: I0709 09:56:29.068935 2565 server.go:956] "Client rotation is on, will bootstrap in background"
Jul 9 09:56:29.070378 kubelet[2565]: I0709 09:56:29.070335 2565 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem"
Jul 9 09:56:29.072622 kubelet[2565]: I0709 09:56:29.072532 2565 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 9 09:56:29.077083 kubelet[2565]: E0709 09:56:29.077048 2565 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 9 09:56:29.077083 kubelet[2565]: I0709 09:56:29.077080 2565 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 9 09:56:29.079993 kubelet[2565]: I0709 09:56:29.079968 2565 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 9 09:56:29.080216 kubelet[2565]: I0709 09:56:29.080174 2565 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 9 09:56:29.080378 kubelet[2565]: I0709 09:56:29.080211 2565 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 9 09:56:29.080454 kubelet[2565]: I0709 09:56:29.080385 2565 topology_manager.go:138] "Creating topology manager with none policy"
Jul 9 09:56:29.080454 kubelet[2565]: I0709 09:56:29.080394 2565 container_manager_linux.go:303] "Creating device plugin manager"
Jul 9 09:56:29.080454 kubelet[2565]: I0709 09:56:29.080438 2565 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 09:56:29.080667 kubelet[2565]: I0709 09:56:29.080648 2565 kubelet.go:480] "Attempting to sync node with API server"
Jul 9 09:56:29.080667 kubelet[2565]: I0709 09:56:29.080667 2565 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 9 09:56:29.080715 kubelet[2565]: I0709 09:56:29.080691 2565 kubelet.go:386] "Adding apiserver pod source"
Jul 9 09:56:29.080715 kubelet[2565]: I0709 09:56:29.080705 2565 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 9 09:56:29.081491 kubelet[2565]: I0709 09:56:29.081463 2565 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Jul 9 09:56:29.082041 kubelet[2565]: I0709 09:56:29.082020 2565 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jul 9 09:56:29.084576 kubelet[2565]: I0709 09:56:29.083833 2565 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jul 9 09:56:29.084576 kubelet[2565]: I0709 09:56:29.083877 2565 server.go:1289] "Started kubelet"
Jul 9 09:56:29.084667 kubelet[2565]: I0709 09:56:29.084617 2565 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jul 9 09:56:29.085490 kubelet[2565]: I0709 09:56:29.085456 2565 server.go:317] "Adding debug handlers to kubelet server"
Jul 9 09:56:29.085566 kubelet[2565]: I0709 09:56:29.085527 2565 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 9 09:56:29.085879 kubelet[2565]: I0709 09:56:29.084615 2565 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 9 09:56:29.086139 kubelet[2565]: I0709 09:56:29.086105 2565 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 9 09:56:29.086360 kubelet[2565]: I0709 09:56:29.086344 2565 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 9 09:56:29.087874 kubelet[2565]: I0709 09:56:29.087839 2565 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jul 9 09:56:29.088108 kubelet[2565]: E0709 09:56:29.088074 2565 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found"
Jul 9 09:56:29.088841 kubelet[2565]: I0709 09:56:29.088801 2565 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jul 9 09:56:29.090599 kubelet[2565]: I0709 09:56:29.090573 2565 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 9 09:56:29.092150 kubelet[2565]: I0709 09:56:29.092119 2565 reconciler.go:26] "Reconciler: start to sync state"
Jul 9 09:56:29.097575 kubelet[2565]: E0709 09:56:29.097553 2565 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 9 09:56:29.097800 kubelet[2565]: I0709 09:56:29.097780 2565 factory.go:223] Registration of the containerd container factory successfully
Jul 9 09:56:29.097832 kubelet[2565]: I0709 09:56:29.097806 2565 factory.go:223] Registration of the systemd container factory successfully
Jul 9 09:56:29.128640 kubelet[2565]: I0709 09:56:29.128590 2565 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4"
Jul 9 09:56:29.130598 kubelet[2565]: I0709 09:56:29.130417 2565 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6"
Jul 9 09:56:29.130598 kubelet[2565]: I0709 09:56:29.130441 2565 status_manager.go:230] "Starting to sync pod status with apiserver"
Jul 9 09:56:29.130598 kubelet[2565]: I0709 09:56:29.130489 2565 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jul 9 09:56:29.130598 kubelet[2565]: I0709 09:56:29.130500 2565 kubelet.go:2436] "Starting kubelet main sync loop"
Jul 9 09:56:29.130598 kubelet[2565]: E0709 09:56:29.130565 2565 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jul 9 09:56:29.148186 kubelet[2565]: I0709 09:56:29.148157 2565 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jul 9 09:56:29.148186 kubelet[2565]: I0709 09:56:29.148179 2565 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jul 9 09:56:29.148351 kubelet[2565]: I0709 09:56:29.148215 2565 state_mem.go:36] "Initialized new in-memory state store"
Jul 9 09:56:29.148415 kubelet[2565]: I0709 09:56:29.148398 2565 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jul 9 09:56:29.148458 kubelet[2565]: I0709 09:56:29.148424 2565 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jul 9 09:56:29.148481 kubelet[2565]: I0709 09:56:29.148460 2565 policy_none.go:49] "None policy: Start"
Jul 9 09:56:29.148481 kubelet[2565]: I0709 09:56:29.148470 2565 memory_manager.go:186] "Starting memorymanager" policy="None"
Jul 9 09:56:29.148481 kubelet[2565]: I0709 09:56:29.148479 2565 state_mem.go:35] "Initializing new in-memory state store"
Jul 9 09:56:29.148627 kubelet[2565]: I0709 09:56:29.148614 2565 state_mem.go:75] "Updated machine memory state"
Jul 9 09:56:29.152739 kubelet[2565]: E0709 09:56:29.152714 2565 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jul 9 09:56:29.152890 kubelet[2565]: I0709 09:56:29.152879 2565 eviction_manager.go:189] "Eviction manager: starting control loop"
Jul 9 09:56:29.152922 kubelet[2565]: I0709 09:56:29.152891 2565 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jul 9 09:56:29.153148 kubelet[2565]: I0709 09:56:29.153080 2565 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jul 9 09:56:29.153680 kubelet[2565]: E0709 09:56:29.153655 2565 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jul 9 09:56:29.231442 kubelet[2565]: I0709 09:56:29.231356 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:29.231722 kubelet[2565]: I0709 09:56:29.231463 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:29.231722 kubelet[2565]: I0709 09:56:29.231513 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:29.258552 kubelet[2565]: I0709 09:56:29.258515 2565 kubelet_node_status.go:75] "Attempting to register node" node="localhost"
Jul 9 09:56:29.282300 kubelet[2565]: I0709 09:56:29.282270 2565 kubelet_node_status.go:124] "Node was previously registered" node="localhost"
Jul 9 09:56:29.282436 kubelet[2565]: I0709 09:56:29.282355 2565 kubelet_node_status.go:78] "Successfully registered node" node="localhost"
Jul 9 09:56:29.293179 kubelet[2565]: I0709 09:56:29.293122 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9291031337121f74cb0bf13dc25e3597-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"9291031337121f74cb0bf13dc25e3597\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:29.293179 kubelet[2565]: I0709 09:56:29.293162 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:29.293179 kubelet[2565]: I0709 09:56:29.293186 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:29.293364 kubelet[2565]: I0709 09:56:29.293207 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:29.293364 kubelet[2565]: I0709 09:56:29.293229 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9291031337121f74cb0bf13dc25e3597-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"9291031337121f74cb0bf13dc25e3597\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:29.293364 kubelet[2565]: I0709 09:56:29.293268 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9291031337121f74cb0bf13dc25e3597-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"9291031337121f74cb0bf13dc25e3597\") " pod="kube-system/kube-apiserver-localhost"
Jul 9 09:56:29.293364 kubelet[2565]: I0709 09:56:29.293311 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:29.293364 kubelet[2565]: I0709 09:56:29.293328 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:29.293573 kubelet[2565]: I0709 09:56:29.293343 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:29.520303 sudo[2607]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
Jul 9 09:56:29.520591 sudo[2607]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
Jul 9 09:56:29.546970 kubelet[2565]: E0709 09:56:29.546845 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:29.546970 kubelet[2565]: E0709 09:56:29.546880 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:29.546970 kubelet[2565]: E0709 09:56:29.546857 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:29.970990 sudo[2607]: pam_unix(sudo:session): session closed for user root
Jul 9 09:56:30.081925 kubelet[2565]: I0709 09:56:30.081871 2565 apiserver.go:52] "Watching apiserver"
Jul 9 09:56:30.090047 kubelet[2565]: I0709 09:56:30.089554 2565 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world"
Jul 9 09:56:30.140051 kubelet[2565]: I0709 09:56:30.139913 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:30.140154 kubelet[2565]: I0709 09:56:30.140110 2565 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:30.142642 kubelet[2565]: E0709 09:56:30.142600 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:30.150150 kubelet[2565]: E0709 09:56:30.150115 2565 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Jul 9 09:56:30.150311 kubelet[2565]: E0709 09:56:30.150286 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:30.150427 kubelet[2565]: E0709 09:56:30.150119 2565 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Jul 9 09:56:30.150455 kubelet[2565]: E0709 09:56:30.150425 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:30.159341 kubelet[2565]: I0709 09:56:30.159281 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.159266548 podStartE2EDuration="1.159266548s" podCreationTimestamp="2025-07-09 09:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:56:30.158835854 +0000 UTC m=+1.130208229" watchObservedRunningTime="2025-07-09 09:56:30.159266548 +0000 UTC m=+1.130638923"
Jul 9 09:56:30.178829 kubelet[2565]: I0709 09:56:30.178710 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.178692203 podStartE2EDuration="1.178692203s" podCreationTimestamp="2025-07-09 09:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:56:30.167747376 +0000 UTC m=+1.139119751" watchObservedRunningTime="2025-07-09 09:56:30.178692203 +0000 UTC m=+1.150064578"
Jul 9 09:56:30.188532 kubelet[2565]: I0709 09:56:30.188379 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.188362966 podStartE2EDuration="1.188362966s" podCreationTimestamp="2025-07-09 09:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:56:30.178972299 +0000 UTC m=+1.150344674" watchObservedRunningTime="2025-07-09 09:56:30.188362966 +0000 UTC m=+1.159735341"
Jul 9 09:56:31.141693 kubelet[2565]: E0709 09:56:31.141656 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:31.141995 kubelet[2565]: E0709 09:56:31.141714 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:31.141995 kubelet[2565]: E0709 09:56:31.141789 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:31.435750 sudo[1644]: pam_unix(sudo:session): session closed for user root
Jul 9 09:56:31.441861 sshd[1643]: Connection closed by 10.0.0.1 port 47394
Jul 9 09:56:31.442367 sshd-session[1640]: pam_unix(sshd:session): session closed for user core
Jul 9 09:56:31.445770 systemd-logind[1443]: Session 7 logged out. Waiting for processes to exit.
Jul 9 09:56:31.446066 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:47394.service: Deactivated successfully.
Jul 9 09:56:31.448346 systemd[1]: session-7.scope: Deactivated successfully.
Jul 9 09:56:31.448532 systemd[1]: session-7.scope: Consumed 7.357s CPU time, 256.2M memory peak.
Jul 9 09:56:31.449648 systemd-logind[1443]: Removed session 7.
Jul 9 09:56:32.147097 kubelet[2565]: E0709 09:56:32.147052 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:32.178071 kubelet[2565]: E0709 09:56:32.178031 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:33.506304 kubelet[2565]: I0709 09:56:33.506256 2565 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jul 9 09:56:33.511965 containerd[1455]: time="2025-07-09T09:56:33.511906859Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jul 9 09:56:33.512312 kubelet[2565]: I0709 09:56:33.512285 2565 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jul 9 09:56:34.013007 kubelet[2565]: E0709 09:56:34.012958 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:34.147290 kubelet[2565]: E0709 09:56:34.147169 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:34.669711 systemd[1]: Created slice kubepods-besteffort-pod99b1e727_1a94_4d1a_9e1d_13bd95a7cf66.slice - libcontainer container kubepods-besteffort-pod99b1e727_1a94_4d1a_9e1d_13bd95a7cf66.slice.
Jul 9 09:56:34.702185 systemd[1]: Created slice kubepods-besteffort-pod432edea4_0afe_4eec_ab54_42712d3a7bf4.slice - libcontainer container kubepods-besteffort-pod432edea4_0afe_4eec_ab54_42712d3a7bf4.slice.
Jul 9 09:56:34.723435 systemd[1]: Created slice kubepods-burstable-podf1c092ae_d373_4c35_ad03_7bc24e8ea6e5.slice - libcontainer container kubepods-burstable-podf1c092ae_d373_4c35_ad03_7bc24e8ea6e5.slice.
Jul 9 09:56:34.729696 kubelet[2565]: I0709 09:56:34.729131 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-cgroup\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.729696 kubelet[2565]: I0709 09:56:34.729229 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-lib-modules\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.729696 kubelet[2565]: I0709 09:56:34.729246 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/432edea4-0afe-4eec-ab54-42712d3a7bf4-lib-modules\") pod \"kube-proxy-kxqs8\" (UID: \"432edea4-0afe-4eec-ab54-42712d3a7bf4\") " pod="kube-system/kube-proxy-kxqs8"
Jul 9 09:56:34.729696 kubelet[2565]: I0709 09:56:34.729263 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hostproc\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.729696 kubelet[2565]: I0709 09:56:34.729279 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hubble-tls\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730101 kubelet[2565]: I0709 09:56:34.729294 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-749tm\" (UID: \"99b1e727-1a94-4d1a-9e1d-13bd95a7cf66\") " pod="kube-system/cilium-operator-6c4d7847fc-749tm"
Jul 9 09:56:34.730101 kubelet[2565]: I0709 09:56:34.729314 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/432edea4-0afe-4eec-ab54-42712d3a7bf4-xtables-lock\") pod \"kube-proxy-kxqs8\" (UID: \"432edea4-0afe-4eec-ab54-42712d3a7bf4\") " pod="kube-system/kube-proxy-kxqs8"
Jul 9 09:56:34.730101 kubelet[2565]: I0709 09:56:34.729329 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7bj5\" (UniqueName: \"kubernetes.io/projected/432edea4-0afe-4eec-ab54-42712d3a7bf4-kube-api-access-z7bj5\") pod \"kube-proxy-kxqs8\" (UID: \"432edea4-0afe-4eec-ab54-42712d3a7bf4\") " pod="kube-system/kube-proxy-kxqs8"
Jul 9 09:56:34.730101 kubelet[2565]: I0709 09:56:34.729346 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-net\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730101 kubelet[2565]: I0709 09:56:34.729361 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x262w\" (UniqueName: \"kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-kube-api-access-x262w\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730202 kubelet[2565]: I0709 09:56:34.729377 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/432edea4-0afe-4eec-ab54-42712d3a7bf4-kube-proxy\") pod \"kube-proxy-kxqs8\" (UID: \"432edea4-0afe-4eec-ab54-42712d3a7bf4\") " pod="kube-system/kube-proxy-kxqs8"
Jul 9 09:56:34.730202 kubelet[2565]: I0709 09:56:34.729392 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-config-path\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730202 kubelet[2565]: I0709 09:56:34.729406 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-kernel\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730202 kubelet[2565]: I0709 09:56:34.729420 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-xtables-lock\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730202 kubelet[2565]: I0709 09:56:34.729434 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-run\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730202 kubelet[2565]: I0709 09:56:34.729449 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cni-path\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730321 kubelet[2565]: I0709 09:56:34.729465 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-clustermesh-secrets\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730321 kubelet[2565]: I0709 09:56:34.729492 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmwmv\" (UniqueName: \"kubernetes.io/projected/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-kube-api-access-cmwmv\") pod \"cilium-operator-6c4d7847fc-749tm\" (UID: \"99b1e727-1a94-4d1a-9e1d-13bd95a7cf66\") " pod="kube-system/cilium-operator-6c4d7847fc-749tm"
Jul 9 09:56:34.730321 kubelet[2565]: I0709 09:56:34.729508 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-bpf-maps\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.730321 kubelet[2565]: I0709 09:56:34.729522 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-etc-cni-netd\") pod \"cilium-xwz7b\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " pod="kube-system/cilium-xwz7b"
Jul 9 09:56:34.979569 kubelet[2565]: E0709 09:56:34.979525 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:34.980271 containerd[1455]: time="2025-07-09T09:56:34.980235115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-749tm,Uid:99b1e727-1a94-4d1a-9e1d-13bd95a7cf66,Namespace:kube-system,Attempt:0,}"
Jul 9 09:56:35.013875 kubelet[2565]: E0709 09:56:35.013592 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:35.015469 containerd[1455]: time="2025-07-09T09:56:35.015405269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kxqs8,Uid:432edea4-0afe-4eec-ab54-42712d3a7bf4,Namespace:kube-system,Attempt:0,}"
Jul 9 09:56:35.020884 containerd[1455]: time="2025-07-09T09:56:35.020797350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 9 09:56:35.020884 containerd[1455]: time="2025-07-09T09:56:35.020847628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 9 09:56:35.021151 containerd[1455]: time="2025-07-09T09:56:35.021005108Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 09:56:35.021151 containerd[1455]: time="2025-07-09T09:56:35.021114112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 09:56:35.032411 kubelet[2565]: E0709 09:56:35.032368 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:56:35.034794 containerd[1455]: time="2025-07-09T09:56:35.034274530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xwz7b,Uid:f1c092ae-d373-4c35-ad03-7bc24e8ea6e5,Namespace:kube-system,Attempt:0,}"
Jul 9 09:56:35.037213 containerd[1455]: time="2025-07-09T09:56:35.037134236Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 9 09:56:35.037622 containerd[1455]: time="2025-07-09T09:56:35.037368935Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 9 09:56:35.037622 containerd[1455]: time="2025-07-09T09:56:35.037393754Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 09:56:35.037826 containerd[1455]: time="2025-07-09T09:56:35.037793980Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 09:56:35.041006 systemd[1]: Started cri-containerd-2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94.scope - libcontainer container 2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94.
Jul 9 09:56:35.059727 systemd[1]: Started cri-containerd-5ae17d884858746a9e0c4216fa1d39715cc46834d93f6a400554a4b8ad9c4e18.scope - libcontainer container 5ae17d884858746a9e0c4216fa1d39715cc46834d93f6a400554a4b8ad9c4e18.
Jul 9 09:56:35.068889 containerd[1455]: time="2025-07-09T09:56:35.068728742Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..."
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 09:56:35.068889 containerd[1455]: time="2025-07-09T09:56:35.068797955Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 09:56:35.068889 containerd[1455]: time="2025-07-09T09:56:35.068813567Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:35.069739 containerd[1455]: time="2025-07-09T09:56:35.069677988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:35.087878 containerd[1455]: time="2025-07-09T09:56:35.087837147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-749tm,Uid:99b1e727-1a94-4d1a-9e1d-13bd95a7cf66,Namespace:kube-system,Attempt:0,} returns sandbox id \"2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94\"" Jul 9 09:56:35.089268 kubelet[2565]: E0709 09:56:35.089097 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:35.089740 systemd[1]: Started cri-containerd-7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82.scope - libcontainer container 7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82. 
Jul 9 09:56:35.090813 containerd[1455]: time="2025-07-09T09:56:35.090777114Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 09:56:35.100806 containerd[1455]: time="2025-07-09T09:56:35.100750256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kxqs8,Uid:432edea4-0afe-4eec-ab54-42712d3a7bf4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ae17d884858746a9e0c4216fa1d39715cc46834d93f6a400554a4b8ad9c4e18\"" Jul 9 09:56:35.102765 kubelet[2565]: E0709 09:56:35.102727 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:35.112531 containerd[1455]: time="2025-07-09T09:56:35.112460446Z" level=info msg="CreateContainer within sandbox \"5ae17d884858746a9e0c4216fa1d39715cc46834d93f6a400554a4b8ad9c4e18\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 09:56:35.122830 containerd[1455]: time="2025-07-09T09:56:35.122716604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xwz7b,Uid:f1c092ae-d373-4c35-ad03-7bc24e8ea6e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\"" Jul 9 09:56:35.123513 kubelet[2565]: E0709 09:56:35.123430 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:35.139936 containerd[1455]: time="2025-07-09T09:56:35.139886247Z" level=info msg="CreateContainer within sandbox \"5ae17d884858746a9e0c4216fa1d39715cc46834d93f6a400554a4b8ad9c4e18\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"58bbbfc99044f3a55c83fbff0afee730f7e1b42bba59534dfd2f231647288abb\"" Jul 9 09:56:35.141724 containerd[1455]: time="2025-07-09T09:56:35.141685542Z" 
level=info msg="StartContainer for \"58bbbfc99044f3a55c83fbff0afee730f7e1b42bba59534dfd2f231647288abb\"" Jul 9 09:56:35.157829 kubelet[2565]: E0709 09:56:35.157801 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:35.175734 systemd[1]: Started cri-containerd-58bbbfc99044f3a55c83fbff0afee730f7e1b42bba59534dfd2f231647288abb.scope - libcontainer container 58bbbfc99044f3a55c83fbff0afee730f7e1b42bba59534dfd2f231647288abb. Jul 9 09:56:35.209590 containerd[1455]: time="2025-07-09T09:56:35.209329721Z" level=info msg="StartContainer for \"58bbbfc99044f3a55c83fbff0afee730f7e1b42bba59534dfd2f231647288abb\" returns successfully" Jul 9 09:56:36.163169 kubelet[2565]: E0709 09:56:36.163114 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:36.202421 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3502300650.mount: Deactivated successfully. 
Jul 9 09:56:36.569826 containerd[1455]: time="2025-07-09T09:56:36.569775369Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:36.570774 containerd[1455]: time="2025-07-09T09:56:36.570568297Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 9 09:56:36.571450 containerd[1455]: time="2025-07-09T09:56:36.571395690Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:36.573101 containerd[1455]: time="2025-07-09T09:56:36.572987390Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.482168965s" Jul 9 09:56:36.573101 containerd[1455]: time="2025-07-09T09:56:36.573061764Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 9 09:56:36.576435 containerd[1455]: time="2025-07-09T09:56:36.576276387Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 09:56:36.580882 containerd[1455]: time="2025-07-09T09:56:36.580832131Z" level=info msg="CreateContainer within sandbox 
\"2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 09:56:36.632268 containerd[1455]: time="2025-07-09T09:56:36.632223556Z" level=info msg="CreateContainer within sandbox \"2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\"" Jul 9 09:56:36.633189 containerd[1455]: time="2025-07-09T09:56:36.633145856Z" level=info msg="StartContainer for \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\"" Jul 9 09:56:36.661703 systemd[1]: Started cri-containerd-dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd.scope - libcontainer container dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd. Jul 9 09:56:36.685515 containerd[1455]: time="2025-07-09T09:56:36.685421314Z" level=info msg="StartContainer for \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\" returns successfully" Jul 9 09:56:37.177313 kubelet[2565]: E0709 09:56:37.177271 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:37.177813 kubelet[2565]: E0709 09:56:37.177489 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:37.210233 kubelet[2565]: I0709 09:56:37.209825 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kxqs8" podStartSLOduration=3.205127756 podStartE2EDuration="3.205127756s" podCreationTimestamp="2025-07-09 09:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:56:36.175334535 
+0000 UTC m=+7.146706910" watchObservedRunningTime="2025-07-09 09:56:37.205127756 +0000 UTC m=+8.176500131" Jul 9 09:56:38.178774 kubelet[2565]: E0709 09:56:38.178736 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:42.013496 kubelet[2565]: E0709 09:56:42.013279 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:42.024240 kubelet[2565]: I0709 09:56:42.023664 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-749tm" podStartSLOduration=6.537898134 podStartE2EDuration="8.023630555s" podCreationTimestamp="2025-07-09 09:56:34 +0000 UTC" firstStartedPulling="2025-07-09 09:56:35.090339299 +0000 UTC m=+6.061711674" lastFinishedPulling="2025-07-09 09:56:36.57607172 +0000 UTC m=+7.547444095" observedRunningTime="2025-07-09 09:56:37.209984979 +0000 UTC m=+8.181357394" watchObservedRunningTime="2025-07-09 09:56:42.023630555 +0000 UTC m=+12.995002930" Jul 9 09:56:42.189131 kubelet[2565]: E0709 09:56:42.188710 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:43.183126 kubelet[2565]: E0709 09:56:43.183088 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:43.194492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2729208316.mount: Deactivated successfully. Jul 9 09:56:44.128398 update_engine[1444]: I20250709 09:56:44.128335 1444 update_attempter.cc:509] Updating boot flags... 
Jul 9 09:56:44.316592 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3034) Jul 9 09:56:44.331644 containerd[1455]: time="2025-07-09T09:56:44.330662979Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:44.331963 containerd[1455]: time="2025-07-09T09:56:44.331936444Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 9 09:56:44.334721 containerd[1455]: time="2025-07-09T09:56:44.334670734Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 09:56:44.335929 containerd[1455]: time="2025-07-09T09:56:44.335889695Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.759551706s" Jul 9 09:56:44.335982 containerd[1455]: time="2025-07-09T09:56:44.335927911Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 9 09:56:44.346241 containerd[1455]: time="2025-07-09T09:56:44.344371563Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 09:56:44.369705 kernel: BTRFS warning: duplicate device 
/dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3036) Jul 9 09:56:44.396128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907745133.mount: Deactivated successfully. Jul 9 09:56:44.408742 containerd[1455]: time="2025-07-09T09:56:44.408693273Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\"" Jul 9 09:56:44.409442 containerd[1455]: time="2025-07-09T09:56:44.409232424Z" level=info msg="StartContainer for \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\"" Jul 9 09:56:44.439755 systemd[1]: Started cri-containerd-9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a.scope - libcontainer container 9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a. Jul 9 09:56:44.462259 containerd[1455]: time="2025-07-09T09:56:44.460993842Z" level=info msg="StartContainer for \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\" returns successfully" Jul 9 09:56:44.515507 systemd[1]: cri-containerd-9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a.scope: Deactivated successfully. Jul 9 09:56:44.516495 systemd[1]: cri-containerd-9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a.scope: Consumed 63ms CPU time, 6.7M memory peak, 3.1M written to disk. 
Jul 9 09:56:44.730860 containerd[1455]: time="2025-07-09T09:56:44.726136443Z" level=info msg="shim disconnected" id=9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a namespace=k8s.io Jul 9 09:56:44.730860 containerd[1455]: time="2025-07-09T09:56:44.730846578Z" level=warning msg="cleaning up after shim disconnected" id=9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a namespace=k8s.io Jul 9 09:56:44.730860 containerd[1455]: time="2025-07-09T09:56:44.730862144Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:56:45.187810 kubelet[2565]: E0709 09:56:45.187695 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:45.194455 containerd[1455]: time="2025-07-09T09:56:45.194289967Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 09:56:45.213848 containerd[1455]: time="2025-07-09T09:56:45.213800911Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\"" Jul 9 09:56:45.214754 containerd[1455]: time="2025-07-09T09:56:45.214725281Z" level=info msg="StartContainer for \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\"" Jul 9 09:56:45.238750 systemd[1]: Started cri-containerd-de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3.scope - libcontainer container de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3. 
Jul 9 09:56:45.260117 containerd[1455]: time="2025-07-09T09:56:45.259983789Z" level=info msg="StartContainer for \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\" returns successfully" Jul 9 09:56:45.272752 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 09:56:45.272966 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 09:56:45.273337 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 09:56:45.279997 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 09:56:45.280209 systemd[1]: cri-containerd-de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3.scope: Deactivated successfully. Jul 9 09:56:45.292012 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 9 09:56:45.306035 containerd[1455]: time="2025-07-09T09:56:45.305953222Z" level=info msg="shim disconnected" id=de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3 namespace=k8s.io Jul 9 09:56:45.306035 containerd[1455]: time="2025-07-09T09:56:45.306012846Z" level=warning msg="cleaning up after shim disconnected" id=de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3 namespace=k8s.io Jul 9 09:56:45.306035 containerd[1455]: time="2025-07-09T09:56:45.306021529Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:56:45.316154 containerd[1455]: time="2025-07-09T09:56:45.316108894Z" level=warning msg="cleanup warnings time=\"2025-07-09T09:56:45Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 9 09:56:45.393035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a-rootfs.mount: Deactivated successfully. 
Jul 9 09:56:46.193843 kubelet[2565]: E0709 09:56:46.193799 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:46.218714 containerd[1455]: time="2025-07-09T09:56:46.218662866Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 09:56:46.251723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90147220.mount: Deactivated successfully. Jul 9 09:56:46.256804 containerd[1455]: time="2025-07-09T09:56:46.256749064Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\"" Jul 9 09:56:46.257615 containerd[1455]: time="2025-07-09T09:56:46.257563130Z" level=info msg="StartContainer for \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\"" Jul 9 09:56:46.285728 systemd[1]: Started cri-containerd-7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878.scope - libcontainer container 7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878. Jul 9 09:56:46.312968 containerd[1455]: time="2025-07-09T09:56:46.312926743Z" level=info msg="StartContainer for \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\" returns successfully" Jul 9 09:56:46.339244 systemd[1]: cri-containerd-7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878.scope: Deactivated successfully. 
Jul 9 09:56:46.364893 containerd[1455]: time="2025-07-09T09:56:46.364833456Z" level=info msg="shim disconnected" id=7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878 namespace=k8s.io Jul 9 09:56:46.364893 containerd[1455]: time="2025-07-09T09:56:46.364888917Z" level=warning msg="cleaning up after shim disconnected" id=7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878 namespace=k8s.io Jul 9 09:56:46.364893 containerd[1455]: time="2025-07-09T09:56:46.364898160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:56:46.392495 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878-rootfs.mount: Deactivated successfully. Jul 9 09:56:47.197913 kubelet[2565]: E0709 09:56:47.197868 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:47.230457 containerd[1455]: time="2025-07-09T09:56:47.230419197Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 09:56:47.268569 containerd[1455]: time="2025-07-09T09:56:47.268452202Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\"" Jul 9 09:56:47.269344 containerd[1455]: time="2025-07-09T09:56:47.269309544Z" level=info msg="StartContainer for \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\"" Jul 9 09:56:47.297750 systemd[1]: Started cri-containerd-81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c.scope - libcontainer container 81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c. 
Jul 9 09:56:47.321588 systemd[1]: cri-containerd-81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c.scope: Deactivated successfully. Jul 9 09:56:47.324172 containerd[1455]: time="2025-07-09T09:56:47.324124543Z" level=info msg="StartContainer for \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\" returns successfully" Jul 9 09:56:47.347186 containerd[1455]: time="2025-07-09T09:56:47.347118887Z" level=info msg="shim disconnected" id=81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c namespace=k8s.io Jul 9 09:56:47.347186 containerd[1455]: time="2025-07-09T09:56:47.347176747Z" level=warning msg="cleaning up after shim disconnected" id=81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c namespace=k8s.io Jul 9 09:56:47.347186 containerd[1455]: time="2025-07-09T09:56:47.347184950Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:56:47.392729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c-rootfs.mount: Deactivated successfully. Jul 9 09:56:48.202398 kubelet[2565]: E0709 09:56:48.202324 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:48.231240 containerd[1455]: time="2025-07-09T09:56:48.231195112Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 09:56:48.262560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3903798628.mount: Deactivated successfully. 
Jul 9 09:56:48.263577 containerd[1455]: time="2025-07-09T09:56:48.263525275Z" level=info msg="CreateContainer within sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\"" Jul 9 09:56:48.264193 containerd[1455]: time="2025-07-09T09:56:48.264169768Z" level=info msg="StartContainer for \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\"" Jul 9 09:56:48.293764 systemd[1]: Started cri-containerd-d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b.scope - libcontainer container d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b. Jul 9 09:56:48.319986 containerd[1455]: time="2025-07-09T09:56:48.319935234Z" level=info msg="StartContainer for \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\" returns successfully" Jul 9 09:56:48.499317 kubelet[2565]: I0709 09:56:48.499024 2565 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 09:56:48.542913 systemd[1]: Created slice kubepods-burstable-pod5124098f_b844_4042_bf2e_fadb6673b91f.slice - libcontainer container kubepods-burstable-pod5124098f_b844_4042_bf2e_fadb6673b91f.slice. Jul 9 09:56:48.550063 systemd[1]: Created slice kubepods-burstable-pod8c09d8dd_9b3b_402a_a868_8a2f362e1dbd.slice - libcontainer container kubepods-burstable-pod8c09d8dd_9b3b_402a_a868_8a2f362e1dbd.slice. 
Jul 9 09:56:48.624837 kubelet[2565]: I0709 09:56:48.624792 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5124098f-b844-4042-bf2e-fadb6673b91f-config-volume\") pod \"coredns-674b8bbfcf-vqb29\" (UID: \"5124098f-b844-4042-bf2e-fadb6673b91f\") " pod="kube-system/coredns-674b8bbfcf-vqb29" Jul 9 09:56:48.624837 kubelet[2565]: I0709 09:56:48.624836 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8c09d8dd-9b3b-402a-a868-8a2f362e1dbd-config-volume\") pod \"coredns-674b8bbfcf-bmvl8\" (UID: \"8c09d8dd-9b3b-402a-a868-8a2f362e1dbd\") " pod="kube-system/coredns-674b8bbfcf-bmvl8" Jul 9 09:56:48.625018 kubelet[2565]: I0709 09:56:48.624858 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxqrp\" (UniqueName: \"kubernetes.io/projected/5124098f-b844-4042-bf2e-fadb6673b91f-kube-api-access-bxqrp\") pod \"coredns-674b8bbfcf-vqb29\" (UID: \"5124098f-b844-4042-bf2e-fadb6673b91f\") " pod="kube-system/coredns-674b8bbfcf-vqb29" Jul 9 09:56:48.625018 kubelet[2565]: I0709 09:56:48.624877 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7fv9\" (UniqueName: \"kubernetes.io/projected/8c09d8dd-9b3b-402a-a868-8a2f362e1dbd-kube-api-access-b7fv9\") pod \"coredns-674b8bbfcf-bmvl8\" (UID: \"8c09d8dd-9b3b-402a-a868-8a2f362e1dbd\") " pod="kube-system/coredns-674b8bbfcf-bmvl8" Jul 9 09:56:48.847372 kubelet[2565]: E0709 09:56:48.847034 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:48.847927 containerd[1455]: time="2025-07-09T09:56:48.847822778Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-vqb29,Uid:5124098f-b844-4042-bf2e-fadb6673b91f,Namespace:kube-system,Attempt:0,}" Jul 9 09:56:48.860817 kubelet[2565]: E0709 09:56:48.859274 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:48.860982 containerd[1455]: time="2025-07-09T09:56:48.860182822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bmvl8,Uid:8c09d8dd-9b3b-402a-a868-8a2f362e1dbd,Namespace:kube-system,Attempt:0,}" Jul 9 09:56:49.210019 kubelet[2565]: E0709 09:56:49.209295 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:50.210732 kubelet[2565]: E0709 09:56:50.210643 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:50.592541 systemd-networkd[1392]: cilium_host: Link UP Jul 9 09:56:50.592731 systemd-networkd[1392]: cilium_net: Link UP Jul 9 09:56:50.592868 systemd-networkd[1392]: cilium_net: Gained carrier Jul 9 09:56:50.592999 systemd-networkd[1392]: cilium_host: Gained carrier Jul 9 09:56:50.680230 systemd-networkd[1392]: cilium_vxlan: Link UP Jul 9 09:56:50.680238 systemd-networkd[1392]: cilium_vxlan: Gained carrier Jul 9 09:56:50.986671 kernel: NET: Registered PF_ALG protocol family Jul 9 09:56:50.995659 systemd-networkd[1392]: cilium_net: Gained IPv6LL Jul 9 09:56:51.027728 systemd-networkd[1392]: cilium_host: Gained IPv6LL Jul 9 09:56:51.212018 kubelet[2565]: E0709 09:56:51.211987 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:51.566273 systemd-networkd[1392]: 
lxc_health: Link UP Jul 9 09:56:51.566501 systemd-networkd[1392]: lxc_health: Gained carrier Jul 9 09:56:52.004127 systemd-networkd[1392]: lxc2078b112bad4: Link UP Jul 9 09:56:52.008832 kernel: eth0: renamed from tmp608c0 Jul 9 09:56:52.008174 systemd-networkd[1392]: lxcbfe3b8edbc02: Link UP Jul 9 09:56:52.021347 systemd-networkd[1392]: lxc2078b112bad4: Gained carrier Jul 9 09:56:52.023652 kernel: eth0: renamed from tmp255d1 Jul 9 09:56:52.027363 systemd-networkd[1392]: lxcbfe3b8edbc02: Gained carrier Jul 9 09:56:52.531694 systemd-networkd[1392]: cilium_vxlan: Gained IPv6LL Jul 9 09:56:52.915774 systemd-networkd[1392]: lxc_health: Gained IPv6LL Jul 9 09:56:53.042962 kubelet[2565]: E0709 09:56:53.042511 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:53.062443 kubelet[2565]: I0709 09:56:53.062294 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xwz7b" podStartSLOduration=9.847697352 podStartE2EDuration="19.062277548s" podCreationTimestamp="2025-07-09 09:56:34 +0000 UTC" firstStartedPulling="2025-07-09 09:56:35.1240714 +0000 UTC m=+6.095443775" lastFinishedPulling="2025-07-09 09:56:44.338651596 +0000 UTC m=+15.310023971" observedRunningTime="2025-07-09 09:56:49.22341318 +0000 UTC m=+20.194785635" watchObservedRunningTime="2025-07-09 09:56:53.062277548 +0000 UTC m=+24.033649923" Jul 9 09:56:53.107718 systemd-networkd[1392]: lxcbfe3b8edbc02: Gained IPv6LL Jul 9 09:56:53.217665 kubelet[2565]: E0709 09:56:53.217607 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:53.875725 systemd-networkd[1392]: lxc2078b112bad4: Gained IPv6LL Jul 9 09:56:54.220978 kubelet[2565]: E0709 09:56:54.219492 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:55.555568 containerd[1455]: time="2025-07-09T09:56:55.555445214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 09:56:55.556615 containerd[1455]: time="2025-07-09T09:56:55.556439983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 09:56:55.556615 containerd[1455]: time="2025-07-09T09:56:55.556516239Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:55.557330 containerd[1455]: time="2025-07-09T09:56:55.557185500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:55.565899 containerd[1455]: time="2025-07-09T09:56:55.564416421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 09:56:55.565899 containerd[1455]: time="2025-07-09T09:56:55.564475633Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 09:56:55.565899 containerd[1455]: time="2025-07-09T09:56:55.564490996Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:55.565899 containerd[1455]: time="2025-07-09T09:56:55.564651430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 09:56:55.589790 systemd[1]: Started cri-containerd-608c0181ca778d3e2e87048b2c3c2561d13b1cebc094c4e16b98af6f8440c9b1.scope - libcontainer container 608c0181ca778d3e2e87048b2c3c2561d13b1cebc094c4e16b98af6f8440c9b1. 
Jul 9 09:56:55.597445 systemd[1]: Started cri-containerd-255d1ff0fd0a443d771cfe2bdab8c1597549706ba1b5ecde12cc88576f53327b.scope - libcontainer container 255d1ff0fd0a443d771cfe2bdab8c1597549706ba1b5ecde12cc88576f53327b. Jul 9 09:56:55.602058 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:56:55.612594 systemd-resolved[1325]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 09:56:55.622442 containerd[1455]: time="2025-07-09T09:56:55.622404378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-bmvl8,Uid:8c09d8dd-9b3b-402a-a868-8a2f362e1dbd,Namespace:kube-system,Attempt:0,} returns sandbox id \"608c0181ca778d3e2e87048b2c3c2561d13b1cebc094c4e16b98af6f8440c9b1\"" Jul 9 09:56:55.623235 kubelet[2565]: E0709 09:56:55.623214 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:55.635087 containerd[1455]: time="2025-07-09T09:56:55.634983104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vqb29,Uid:5124098f-b844-4042-bf2e-fadb6673b91f,Namespace:kube-system,Attempt:0,} returns sandbox id \"255d1ff0fd0a443d771cfe2bdab8c1597549706ba1b5ecde12cc88576f53327b\"" Jul 9 09:56:55.635842 kubelet[2565]: E0709 09:56:55.635818 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:55.665705 containerd[1455]: time="2025-07-09T09:56:55.665660236Z" level=info msg="CreateContainer within sandbox \"608c0181ca778d3e2e87048b2c3c2561d13b1cebc094c4e16b98af6f8440c9b1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 09:56:55.682118 containerd[1455]: time="2025-07-09T09:56:55.681888169Z" level=info msg="CreateContainer within 
sandbox \"255d1ff0fd0a443d771cfe2bdab8c1597549706ba1b5ecde12cc88576f53327b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 09:56:55.694760 containerd[1455]: time="2025-07-09T09:56:55.694705385Z" level=info msg="CreateContainer within sandbox \"255d1ff0fd0a443d771cfe2bdab8c1597549706ba1b5ecde12cc88576f53327b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"478c6bb06e8725630cd30804441b27c81d6b50cf7efacd46c98138834ef68efb\"" Jul 9 09:56:55.695614 containerd[1455]: time="2025-07-09T09:56:55.695269664Z" level=info msg="StartContainer for \"478c6bb06e8725630cd30804441b27c81d6b50cf7efacd46c98138834ef68efb\"" Jul 9 09:56:55.696684 containerd[1455]: time="2025-07-09T09:56:55.696593223Z" level=info msg="CreateContainer within sandbox \"608c0181ca778d3e2e87048b2c3c2561d13b1cebc094c4e16b98af6f8440c9b1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"816449ed35a685286af75ea5f352630517c483b2e0ad4dbd1b8c8784e15eb0d3\"" Jul 9 09:56:55.697007 containerd[1455]: time="2025-07-09T09:56:55.696980464Z" level=info msg="StartContainer for \"816449ed35a685286af75ea5f352630517c483b2e0ad4dbd1b8c8784e15eb0d3\"" Jul 9 09:56:55.727714 systemd[1]: Started cri-containerd-478c6bb06e8725630cd30804441b27c81d6b50cf7efacd46c98138834ef68efb.scope - libcontainer container 478c6bb06e8725630cd30804441b27c81d6b50cf7efacd46c98138834ef68efb. Jul 9 09:56:55.729449 systemd[1]: Started cri-containerd-816449ed35a685286af75ea5f352630517c483b2e0ad4dbd1b8c8784e15eb0d3.scope - libcontainer container 816449ed35a685286af75ea5f352630517c483b2e0ad4dbd1b8c8784e15eb0d3. 
Jul 9 09:56:55.759708 containerd[1455]: time="2025-07-09T09:56:55.759657487Z" level=info msg="StartContainer for \"816449ed35a685286af75ea5f352630517c483b2e0ad4dbd1b8c8784e15eb0d3\" returns successfully" Jul 9 09:56:55.759877 containerd[1455]: time="2025-07-09T09:56:55.759681332Z" level=info msg="StartContainer for \"478c6bb06e8725630cd30804441b27c81d6b50cf7efacd46c98138834ef68efb\" returns successfully" Jul 9 09:56:55.884956 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:58770.service - OpenSSH per-connection server daemon (10.0.0.1:58770). Jul 9 09:56:55.937455 sshd[3979]: Accepted publickey for core from 10.0.0.1 port 58770 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:56:55.940351 sshd-session[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:56:55.958702 systemd-logind[1443]: New session 8 of user core. Jul 9 09:56:55.961755 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 09:56:56.116981 sshd[3986]: Connection closed by 10.0.0.1 port 58770 Jul 9 09:56:56.117649 sshd-session[3979]: pam_unix(sshd:session): session closed for user core Jul 9 09:56:56.121340 systemd-logind[1443]: Session 8 logged out. Waiting for processes to exit. Jul 9 09:56:56.121744 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:58770.service: Deactivated successfully. Jul 9 09:56:56.123781 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 09:56:56.124829 systemd-logind[1443]: Removed session 8. 
Jul 9 09:56:56.235181 kubelet[2565]: E0709 09:56:56.235105 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:56.242603 kubelet[2565]: E0709 09:56:56.240566 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:56.249705 kubelet[2565]: I0709 09:56:56.249174 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-bmvl8" podStartSLOduration=22.24916066 podStartE2EDuration="22.24916066s" podCreationTimestamp="2025-07-09 09:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:56:56.248636997 +0000 UTC m=+27.220009412" watchObservedRunningTime="2025-07-09 09:56:56.24916066 +0000 UTC m=+27.220533035" Jul 9 09:56:56.274652 kubelet[2565]: I0709 09:56:56.274592 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vqb29" podStartSLOduration=22.274576192 podStartE2EDuration="22.274576192s" podCreationTimestamp="2025-07-09 09:56:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:56:56.259650969 +0000 UTC m=+27.231023344" watchObservedRunningTime="2025-07-09 09:56:56.274576192 +0000 UTC m=+27.245948567" Jul 9 09:56:57.242724 kubelet[2565]: E0709 09:56:57.242631 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:57.242724 kubelet[2565]: E0709 09:56:57.242674 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:58.244631 kubelet[2565]: E0709 09:56:58.244598 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:56:58.245042 kubelet[2565]: E0709 09:56:58.244951 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:57:01.132131 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:58778.service - OpenSSH per-connection server daemon (10.0.0.1:58778). Jul 9 09:57:01.175396 sshd[4008]: Accepted publickey for core from 10.0.0.1 port 58778 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:01.176842 sshd-session[4008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:01.182389 systemd-logind[1443]: New session 9 of user core. Jul 9 09:57:01.190746 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 09:57:01.331527 sshd[4010]: Connection closed by 10.0.0.1 port 58778 Jul 9 09:57:01.331955 sshd-session[4008]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:01.335399 systemd-logind[1443]: Session 9 logged out. Waiting for processes to exit. Jul 9 09:57:01.335684 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:58778.service: Deactivated successfully. Jul 9 09:57:01.338143 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 09:57:01.339416 systemd-logind[1443]: Removed session 9. Jul 9 09:57:06.347415 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:36548.service - OpenSSH per-connection server daemon (10.0.0.1:36548). 
Jul 9 09:57:06.393597 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 36548 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:06.394486 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:06.399756 systemd-logind[1443]: New session 10 of user core. Jul 9 09:57:06.412768 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 09:57:06.525180 sshd[4030]: Connection closed by 10.0.0.1 port 36548 Jul 9 09:57:06.526863 sshd-session[4027]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:06.531342 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:36548.service: Deactivated successfully. Jul 9 09:57:06.533188 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 09:57:06.535540 systemd-logind[1443]: Session 10 logged out. Waiting for processes to exit. Jul 9 09:57:06.538454 systemd-logind[1443]: Removed session 10. Jul 9 09:57:11.537493 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:36564.service - OpenSSH per-connection server daemon (10.0.0.1:36564). Jul 9 09:57:11.585746 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 36564 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:11.587098 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:11.592609 systemd-logind[1443]: New session 11 of user core. Jul 9 09:57:11.608787 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 09:57:11.737053 sshd[4046]: Connection closed by 10.0.0.1 port 36564 Jul 9 09:57:11.737780 sshd-session[4044]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:11.751420 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:36564.service: Deactivated successfully. Jul 9 09:57:11.755748 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 09:57:11.758024 systemd-logind[1443]: Session 11 logged out. Waiting for processes to exit. 
Jul 9 09:57:11.771371 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:36568.service - OpenSSH per-connection server daemon (10.0.0.1:36568). Jul 9 09:57:11.773049 systemd-logind[1443]: Removed session 11. Jul 9 09:57:11.828714 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 36568 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:11.829982 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:11.835141 systemd-logind[1443]: New session 12 of user core. Jul 9 09:57:11.842740 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 09:57:11.996085 sshd[4062]: Connection closed by 10.0.0.1 port 36568 Jul 9 09:57:11.996781 sshd-session[4059]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:12.005838 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:36568.service: Deactivated successfully. Jul 9 09:57:12.010648 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 09:57:12.012253 systemd-logind[1443]: Session 12 logged out. Waiting for processes to exit. Jul 9 09:57:12.024411 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:36572.service - OpenSSH per-connection server daemon (10.0.0.1:36572). Jul 9 09:57:12.027372 systemd-logind[1443]: Removed session 12. Jul 9 09:57:12.069642 sshd[4072]: Accepted publickey for core from 10.0.0.1 port 36572 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:12.070910 sshd-session[4072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:12.075621 systemd-logind[1443]: New session 13 of user core. Jul 9 09:57:12.085746 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 09:57:12.205105 sshd[4075]: Connection closed by 10.0.0.1 port 36572 Jul 9 09:57:12.205458 sshd-session[4072]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:12.209057 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:36572.service: Deactivated successfully. 
Jul 9 09:57:12.212311 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 09:57:12.213107 systemd-logind[1443]: Session 13 logged out. Waiting for processes to exit. Jul 9 09:57:12.214239 systemd-logind[1443]: Removed session 13. Jul 9 09:57:17.218643 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:58400.service - OpenSSH per-connection server daemon (10.0.0.1:58400). Jul 9 09:57:17.263328 sshd[4089]: Accepted publickey for core from 10.0.0.1 port 58400 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:17.264728 sshd-session[4089]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:17.268955 systemd-logind[1443]: New session 14 of user core. Jul 9 09:57:17.278720 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 09:57:17.394022 sshd[4091]: Connection closed by 10.0.0.1 port 58400 Jul 9 09:57:17.394389 sshd-session[4089]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:17.397861 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:58400.service: Deactivated successfully. Jul 9 09:57:17.399612 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 09:57:17.402017 systemd-logind[1443]: Session 14 logged out. Waiting for processes to exit. Jul 9 09:57:17.402867 systemd-logind[1443]: Removed session 14. Jul 9 09:57:22.405998 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:58416.service - OpenSSH per-connection server daemon (10.0.0.1:58416). Jul 9 09:57:22.451612 sshd[4105]: Accepted publickey for core from 10.0.0.1 port 58416 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:22.453452 sshd-session[4105]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:22.457635 systemd-logind[1443]: New session 15 of user core. Jul 9 09:57:22.470743 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 9 09:57:22.581237 sshd[4107]: Connection closed by 10.0.0.1 port 58416 Jul 9 09:57:22.581719 sshd-session[4105]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:22.595197 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:58416.service: Deactivated successfully. Jul 9 09:57:22.597902 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 09:57:22.599153 systemd-logind[1443]: Session 15 logged out. Waiting for processes to exit. Jul 9 09:57:22.608019 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:49078.service - OpenSSH per-connection server daemon (10.0.0.1:49078). Jul 9 09:57:22.609069 systemd-logind[1443]: Removed session 15. Jul 9 09:57:22.650747 sshd[4120]: Accepted publickey for core from 10.0.0.1 port 49078 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:22.651971 sshd-session[4120]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:22.656011 systemd-logind[1443]: New session 16 of user core. Jul 9 09:57:22.665704 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 09:57:22.870078 sshd[4123]: Connection closed by 10.0.0.1 port 49078 Jul 9 09:57:22.870633 sshd-session[4120]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:22.884756 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:49078.service: Deactivated successfully. Jul 9 09:57:22.886329 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 09:57:22.887007 systemd-logind[1443]: Session 16 logged out. Waiting for processes to exit. Jul 9 09:57:22.896133 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:49080.service - OpenSSH per-connection server daemon (10.0.0.1:49080). Jul 9 09:57:22.897425 systemd-logind[1443]: Removed session 16. 
Jul 9 09:57:22.948841 sshd[4134]: Accepted publickey for core from 10.0.0.1 port 49080 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:22.950307 sshd-session[4134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:22.956380 systemd-logind[1443]: New session 17 of user core. Jul 9 09:57:22.963701 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 09:57:23.927131 sshd[4137]: Connection closed by 10.0.0.1 port 49080 Jul 9 09:57:23.928999 sshd-session[4134]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:23.938027 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:49080.service: Deactivated successfully. Jul 9 09:57:23.939697 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 09:57:23.942383 systemd-logind[1443]: Session 17 logged out. Waiting for processes to exit. Jul 9 09:57:23.949515 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:49084.service - OpenSSH per-connection server daemon (10.0.0.1:49084). Jul 9 09:57:23.951896 systemd-logind[1443]: Removed session 17. Jul 9 09:57:23.990980 sshd[4158]: Accepted publickey for core from 10.0.0.1 port 49084 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:23.992202 sshd-session[4158]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:23.996372 systemd-logind[1443]: New session 18 of user core. Jul 9 09:57:24.004795 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 09:57:24.230293 sshd[4161]: Connection closed by 10.0.0.1 port 49084 Jul 9 09:57:24.232731 sshd-session[4158]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:24.240025 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:49084.service: Deactivated successfully. Jul 9 09:57:24.242927 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 09:57:24.243934 systemd-logind[1443]: Session 18 logged out. Waiting for processes to exit. 
Jul 9 09:57:24.251418 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:49096.service - OpenSSH per-connection server daemon (10.0.0.1:49096). Jul 9 09:57:24.253143 systemd-logind[1443]: Removed session 18. Jul 9 09:57:24.290216 sshd[4171]: Accepted publickey for core from 10.0.0.1 port 49096 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:24.291601 sshd-session[4171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:24.296215 systemd-logind[1443]: New session 19 of user core. Jul 9 09:57:24.305789 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 09:57:24.414583 sshd[4174]: Connection closed by 10.0.0.1 port 49096 Jul 9 09:57:24.414717 sshd-session[4171]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:24.418189 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:49096.service: Deactivated successfully. Jul 9 09:57:24.420206 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 09:57:24.421027 systemd-logind[1443]: Session 19 logged out. Waiting for processes to exit. Jul 9 09:57:24.421922 systemd-logind[1443]: Removed session 19. Jul 9 09:57:29.426952 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:49100.service - OpenSSH per-connection server daemon (10.0.0.1:49100). Jul 9 09:57:29.469651 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 49100 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:29.470939 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:29.474802 systemd-logind[1443]: New session 20 of user core. Jul 9 09:57:29.490734 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 09:57:29.596575 sshd[4195]: Connection closed by 10.0.0.1 port 49100 Jul 9 09:57:29.596671 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:29.599199 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:49100.service: Deactivated successfully. 
Jul 9 09:57:29.600928 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 09:57:29.602169 systemd-logind[1443]: Session 20 logged out. Waiting for processes to exit. Jul 9 09:57:29.603106 systemd-logind[1443]: Removed session 20. Jul 9 09:57:34.611216 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:50928.service - OpenSSH per-connection server daemon (10.0.0.1:50928). Jul 9 09:57:34.661596 sshd[4209]: Accepted publickey for core from 10.0.0.1 port 50928 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:34.662977 sshd-session[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:34.667627 systemd-logind[1443]: New session 21 of user core. Jul 9 09:57:34.673764 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 09:57:34.787284 sshd[4211]: Connection closed by 10.0.0.1 port 50928 Jul 9 09:57:34.787754 sshd-session[4209]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:34.791609 systemd-logind[1443]: Session 21 logged out. Waiting for processes to exit. Jul 9 09:57:34.791817 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:50928.service: Deactivated successfully. Jul 9 09:57:34.793522 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 09:57:34.795359 systemd-logind[1443]: Removed session 21. Jul 9 09:57:39.798910 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:50944.service - OpenSSH per-connection server daemon (10.0.0.1:50944). Jul 9 09:57:39.846241 sshd[4230]: Accepted publickey for core from 10.0.0.1 port 50944 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:39.847532 sshd-session[4230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:39.852248 systemd-logind[1443]: New session 22 of user core. Jul 9 09:57:39.858781 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 9 09:57:39.988467 sshd[4232]: Connection closed by 10.0.0.1 port 50944 Jul 9 09:57:39.987923 sshd-session[4230]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:40.009179 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:50944.service: Deactivated successfully. Jul 9 09:57:40.011030 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 09:57:40.013894 systemd-logind[1443]: Session 22 logged out. Waiting for processes to exit. Jul 9 09:57:40.029870 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:50952.service - OpenSSH per-connection server daemon (10.0.0.1:50952). Jul 9 09:57:40.031004 systemd-logind[1443]: Removed session 22. Jul 9 09:57:40.069998 sshd[4244]: Accepted publickey for core from 10.0.0.1 port 50952 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:40.071278 sshd-session[4244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:40.076113 systemd-logind[1443]: New session 23 of user core. Jul 9 09:57:40.086722 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 09:57:42.643879 containerd[1455]: time="2025-07-09T09:57:42.643836885Z" level=info msg="StopContainer for \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\" with timeout 30 (s)" Jul 9 09:57:42.646797 containerd[1455]: time="2025-07-09T09:57:42.644882479Z" level=info msg="Stop container \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\" with signal terminated" Jul 9 09:57:42.665761 systemd[1]: cri-containerd-dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd.scope: Deactivated successfully. Jul 9 09:57:42.688113 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd-rootfs.mount: Deactivated successfully. 
Jul 9 09:57:42.693666 containerd[1455]: time="2025-07-09T09:57:42.693610175Z" level=info msg="shim disconnected" id=dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd namespace=k8s.io Jul 9 09:57:42.693666 containerd[1455]: time="2025-07-09T09:57:42.693664457Z" level=warning msg="cleaning up after shim disconnected" id=dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd namespace=k8s.io Jul 9 09:57:42.693812 containerd[1455]: time="2025-07-09T09:57:42.693675417Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:57:42.726932 containerd[1455]: time="2025-07-09T09:57:42.726434357Z" level=info msg="StopContainer for \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\" with timeout 2 (s)" Jul 9 09:57:42.727798 containerd[1455]: time="2025-07-09T09:57:42.727741599Z" level=info msg="Stop container \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\" with signal terminated" Jul 9 09:57:42.735875 systemd-networkd[1392]: lxc_health: Link DOWN Jul 9 09:57:42.736027 systemd-networkd[1392]: lxc_health: Lost carrier Jul 9 09:57:42.750779 containerd[1455]: time="2025-07-09T09:57:42.750707302Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 09:57:42.757438 systemd[1]: cri-containerd-d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b.scope: Deactivated successfully. Jul 9 09:57:42.757814 systemd[1]: cri-containerd-d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b.scope: Consumed 6.593s CPU time, 124M memory peak, 136K read from disk, 12.9M written to disk. Jul 9 09:57:42.778426 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b-rootfs.mount: Deactivated successfully. 
Jul 9 09:57:42.779176 containerd[1455]: time="2025-07-09T09:57:42.779124181Z" level=info msg="StopContainer for \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\" returns successfully" Jul 9 09:57:42.779954 containerd[1455]: time="2025-07-09T09:57:42.779774882Z" level=info msg="StopPodSandbox for \"2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94\"" Jul 9 09:57:42.788517 containerd[1455]: time="2025-07-09T09:57:42.787392409Z" level=info msg="shim disconnected" id=d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b namespace=k8s.io Jul 9 09:57:42.788517 containerd[1455]: time="2025-07-09T09:57:42.788501205Z" level=warning msg="cleaning up after shim disconnected" id=d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b namespace=k8s.io Jul 9 09:57:42.788517 containerd[1455]: time="2025-07-09T09:57:42.788519125Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:57:42.788893 containerd[1455]: time="2025-07-09T09:57:42.787970547Z" level=info msg="Container to stop \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 09:57:42.790770 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94-shm.mount: Deactivated successfully. Jul 9 09:57:42.797953 systemd[1]: cri-containerd-2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94.scope: Deactivated successfully. 
Jul 9 09:57:42.808915 containerd[1455]: time="2025-07-09T09:57:42.808868143Z" level=info msg="StopContainer for \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\" returns successfully" Jul 9 09:57:42.809590 containerd[1455]: time="2025-07-09T09:57:42.809441642Z" level=info msg="StopPodSandbox for \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\"" Jul 9 09:57:42.809590 containerd[1455]: time="2025-07-09T09:57:42.809535525Z" level=info msg="Container to stop \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 09:57:42.809741 containerd[1455]: time="2025-07-09T09:57:42.809591687Z" level=info msg="Container to stop \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 09:57:42.809741 containerd[1455]: time="2025-07-09T09:57:42.809621208Z" level=info msg="Container to stop \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 09:57:42.809741 containerd[1455]: time="2025-07-09T09:57:42.809633048Z" level=info msg="Container to stop \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 09:57:42.809741 containerd[1455]: time="2025-07-09T09:57:42.809641648Z" level=info msg="Container to stop \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 09:57:42.811303 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82-shm.mount: Deactivated successfully. Jul 9 09:57:42.818788 systemd[1]: cri-containerd-7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82.scope: Deactivated successfully. 
Jul 9 09:57:42.840318 containerd[1455]: time="2025-07-09T09:57:42.840253519Z" level=info msg="shim disconnected" id=2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94 namespace=k8s.io Jul 9 09:57:42.840318 containerd[1455]: time="2025-07-09T09:57:42.840312600Z" level=warning msg="cleaning up after shim disconnected" id=2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94 namespace=k8s.io Jul 9 09:57:42.840318 containerd[1455]: time="2025-07-09T09:57:42.840320561Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:57:42.841656 containerd[1455]: time="2025-07-09T09:57:42.840260119Z" level=info msg="shim disconnected" id=7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82 namespace=k8s.io Jul 9 09:57:42.841656 containerd[1455]: time="2025-07-09T09:57:42.840628651Z" level=warning msg="cleaning up after shim disconnected" id=7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82 namespace=k8s.io Jul 9 09:57:42.841656 containerd[1455]: time="2025-07-09T09:57:42.840637211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 09:57:42.854979 containerd[1455]: time="2025-07-09T09:57:42.854936193Z" level=info msg="TearDown network for sandbox \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" successfully" Jul 9 09:57:42.854979 containerd[1455]: time="2025-07-09T09:57:42.854969715Z" level=info msg="StopPodSandbox for \"7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82\" returns successfully" Jul 9 09:57:42.872533 containerd[1455]: time="2025-07-09T09:57:42.872488281Z" level=info msg="TearDown network for sandbox \"2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94\" successfully" Jul 9 09:57:42.872533 containerd[1455]: time="2025-07-09T09:57:42.872523362Z" level=info msg="StopPodSandbox for \"2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94\" returns successfully" Jul 9 09:57:42.948542 kubelet[2565]: I0709 09:57:42.948140 2565 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-cgroup\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.948542 kubelet[2565]: I0709 09:57:42.948184 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hostproc\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.948542 kubelet[2565]: I0709 09:57:42.948212 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cmwmv\" (UniqueName: \"kubernetes.io/projected/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-kube-api-access-cmwmv\") pod \"99b1e727-1a94-4d1a-9e1d-13bd95a7cf66\" (UID: \"99b1e727-1a94-4d1a-9e1d-13bd95a7cf66\") " Jul 9 09:57:42.948542 kubelet[2565]: I0709 09:57:42.948232 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x262w\" (UniqueName: \"kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-kube-api-access-x262w\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.948542 kubelet[2565]: I0709 09:57:42.948246 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-xtables-lock\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.948542 kubelet[2565]: I0709 09:57:42.948264 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-run\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: 
\"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949033 kubelet[2565]: I0709 09:57:42.948278 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cni-path\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949033 kubelet[2565]: I0709 09:57:42.948295 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-clustermesh-secrets\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949033 kubelet[2565]: I0709 09:57:42.948311 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-etc-cni-netd\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949033 kubelet[2565]: I0709 09:57:42.948331 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-cilium-config-path\") pod \"99b1e727-1a94-4d1a-9e1d-13bd95a7cf66\" (UID: \"99b1e727-1a94-4d1a-9e1d-13bd95a7cf66\") " Jul 9 09:57:42.949033 kubelet[2565]: I0709 09:57:42.948346 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-kernel\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949033 kubelet[2565]: I0709 09:57:42.948360 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-bpf-maps\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949177 kubelet[2565]: I0709 09:57:42.948375 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hubble-tls\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949177 kubelet[2565]: I0709 09:57:42.948391 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-lib-modules\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949177 kubelet[2565]: I0709 09:57:42.948406 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-net\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.949177 kubelet[2565]: I0709 09:57:42.948422 2565 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-config-path\") pod \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\" (UID: \"f1c092ae-d373-4c35-ad03-7bc24e8ea6e5\") " Jul 9 09:57:42.953272 kubelet[2565]: I0709 09:57:42.953181 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.953364 kubelet[2565]: I0709 09:57:42.953279 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.953364 kubelet[2565]: I0709 09:57:42.954460 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.953364 kubelet[2565]: I0709 09:57:42.954507 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hostproc" (OuterVolumeSpecName: "hostproc") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.958005 kubelet[2565]: I0709 09:57:42.957959 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.958126 kubelet[2565]: I0709 09:57:42.958012 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.958126 kubelet[2565]: I0709 09:57:42.958048 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cni-path" (OuterVolumeSpecName: "cni-path") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.958466 kubelet[2565]: I0709 09:57:42.958426 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-kube-api-access-cmwmv" (OuterVolumeSpecName: "kube-api-access-cmwmv") pod "99b1e727-1a94-4d1a-9e1d-13bd95a7cf66" (UID: "99b1e727-1a94-4d1a-9e1d-13bd95a7cf66"). InnerVolumeSpecName "kube-api-access-cmwmv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 09:57:42.958860 kubelet[2565]: I0709 09:57:42.958835 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 09:57:42.958922 kubelet[2565]: I0709 09:57:42.958911 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 09:57:42.958958 kubelet[2565]: I0709 09:57:42.958939 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.958958 kubelet[2565]: I0709 09:57:42.958955 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.959003 kubelet[2565]: I0709 09:57:42.958970 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 09:57:42.960396 kubelet[2565]: I0709 09:57:42.960358 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-kube-api-access-x262w" (OuterVolumeSpecName: "kube-api-access-x262w") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "kube-api-access-x262w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 09:57:42.960615 kubelet[2565]: I0709 09:57:42.960593 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" (UID: "f1c092ae-d373-4c35-ad03-7bc24e8ea6e5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 09:57:42.965406 kubelet[2565]: I0709 09:57:42.965374 2565 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "99b1e727-1a94-4d1a-9e1d-13bd95a7cf66" (UID: "99b1e727-1a94-4d1a-9e1d-13bd95a7cf66"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 09:57:43.048968 kubelet[2565]: I0709 09:57:43.048926 2565 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049148 kubelet[2565]: I0709 09:57:43.049132 2565 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049211 kubelet[2565]: I0709 09:57:43.049200 2565 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049262 kubelet[2565]: I0709 09:57:43.049251 2565 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049316 kubelet[2565]: I0709 09:57:43.049308 2565 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049371 kubelet[2565]: I0709 09:57:43.049361 2565 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049417 kubelet[2565]: I0709 09:57:43.049409 2565 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049476 kubelet[2565]: I0709 09:57:43.049465 2565 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cmwmv\" (UniqueName: \"kubernetes.io/projected/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-kube-api-access-cmwmv\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049528 kubelet[2565]: I0709 09:57:43.049520 2565 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x262w\" (UniqueName: \"kubernetes.io/projected/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-kube-api-access-x262w\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049628 kubelet[2565]: I0709 09:57:43.049616 2565 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049681 kubelet[2565]: I0709 09:57:43.049672 2565 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049733 kubelet[2565]: I0709 09:57:43.049724 2565 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049786 kubelet[2565]: I0709 09:57:43.049777 2565 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049843 kubelet[2565]: I0709 09:57:43.049833 2565 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049894 kubelet[2565]: I0709 09:57:43.049886 2565 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.049957 kubelet[2565]: I0709 09:57:43.049948 2565 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 9 09:57:43.139290 systemd[1]: Removed slice kubepods-burstable-podf1c092ae_d373_4c35_ad03_7bc24e8ea6e5.slice - libcontainer container kubepods-burstable-podf1c092ae_d373_4c35_ad03_7bc24e8ea6e5.slice. Jul 9 09:57:43.139479 systemd[1]: kubepods-burstable-podf1c092ae_d373_4c35_ad03_7bc24e8ea6e5.slice: Consumed 6.748s CPU time, 124.3M memory peak, 148K read from disk, 16.1M written to disk. Jul 9 09:57:43.141330 systemd[1]: Removed slice kubepods-besteffort-pod99b1e727_1a94_4d1a_9e1d_13bd95a7cf66.slice - libcontainer container kubepods-besteffort-pod99b1e727_1a94_4d1a_9e1d_13bd95a7cf66.slice. 
Jul 9 09:57:43.346121 kubelet[2565]: I0709 09:57:43.346019 2565 scope.go:117] "RemoveContainer" containerID="d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b" Jul 9 09:57:43.349204 containerd[1455]: time="2025-07-09T09:57:43.348723920Z" level=info msg="RemoveContainer for \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\"" Jul 9 09:57:43.351392 containerd[1455]: time="2025-07-09T09:57:43.351363644Z" level=info msg="RemoveContainer for \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\" returns successfully" Jul 9 09:57:43.351781 kubelet[2565]: I0709 09:57:43.351630 2565 scope.go:117] "RemoveContainer" containerID="81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c" Jul 9 09:57:43.352600 containerd[1455]: time="2025-07-09T09:57:43.352570562Z" level=info msg="RemoveContainer for \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\"" Jul 9 09:57:43.356165 containerd[1455]: time="2025-07-09T09:57:43.356080753Z" level=info msg="RemoveContainer for \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\" returns successfully" Jul 9 09:57:43.356498 kubelet[2565]: I0709 09:57:43.356300 2565 scope.go:117] "RemoveContainer" containerID="7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878" Jul 9 09:57:43.358063 containerd[1455]: time="2025-07-09T09:57:43.358025974Z" level=info msg="RemoveContainer for \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\"" Jul 9 09:57:43.362406 containerd[1455]: time="2025-07-09T09:57:43.362377512Z" level=info msg="RemoveContainer for \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\" returns successfully" Jul 9 09:57:43.362697 kubelet[2565]: I0709 09:57:43.362669 2565 scope.go:117] "RemoveContainer" containerID="de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3" Jul 9 09:57:43.365161 containerd[1455]: time="2025-07-09T09:57:43.364359495Z" level=info msg="RemoveContainer for 
\"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\"" Jul 9 09:57:43.366976 containerd[1455]: time="2025-07-09T09:57:43.366944697Z" level=info msg="RemoveContainer for \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\" returns successfully" Jul 9 09:57:43.367587 kubelet[2565]: I0709 09:57:43.367156 2565 scope.go:117] "RemoveContainer" containerID="9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a" Jul 9 09:57:43.369448 containerd[1455]: time="2025-07-09T09:57:43.369415575Z" level=info msg="RemoveContainer for \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\"" Jul 9 09:57:43.371808 containerd[1455]: time="2025-07-09T09:57:43.371720728Z" level=info msg="RemoveContainer for \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\" returns successfully" Jul 9 09:57:43.371908 kubelet[2565]: I0709 09:57:43.371869 2565 scope.go:117] "RemoveContainer" containerID="d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b" Jul 9 09:57:43.372105 containerd[1455]: time="2025-07-09T09:57:43.372066819Z" level=error msg="ContainerStatus for \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\": not found" Jul 9 09:57:43.381190 kubelet[2565]: E0709 09:57:43.381147 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\": not found" containerID="d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b" Jul 9 09:57:43.381268 kubelet[2565]: I0709 09:57:43.381191 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b"} err="failed to get container 
status \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\": rpc error: code = NotFound desc = an error occurred when try to find container \"d5e8542a62fe23d19ddee55c757f4f1296c4af8ea7b787c37b55534eb068dc2b\": not found" Jul 9 09:57:43.381268 kubelet[2565]: I0709 09:57:43.381231 2565 scope.go:117] "RemoveContainer" containerID="81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c" Jul 9 09:57:43.381963 containerd[1455]: time="2025-07-09T09:57:43.381486197Z" level=error msg="ContainerStatus for \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\": not found" Jul 9 09:57:43.381963 containerd[1455]: time="2025-07-09T09:57:43.381871529Z" level=error msg="ContainerStatus for \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\": not found" Jul 9 09:57:43.382119 kubelet[2565]: E0709 09:57:43.381656 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\": not found" containerID="81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c" Jul 9 09:57:43.382119 kubelet[2565]: I0709 09:57:43.381683 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c"} err="failed to get container status \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\": rpc error: code = NotFound desc = an error occurred when try to find container \"81809f4c4996ba9fd1edb12cec8751d0b84a33e41ea519e440f62e64ece15a8c\": not found" Jul 9 
09:57:43.382119 kubelet[2565]: I0709 09:57:43.381699 2565 scope.go:117] "RemoveContainer" containerID="7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878" Jul 9 09:57:43.382119 kubelet[2565]: E0709 09:57:43.381983 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\": not found" containerID="7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878" Jul 9 09:57:43.382119 kubelet[2565]: I0709 09:57:43.382007 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878"} err="failed to get container status \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\": rpc error: code = NotFound desc = an error occurred when try to find container \"7e729a18042094ed0bae671780a15c983f5a6df5f154e162e319bd8ff853d878\": not found" Jul 9 09:57:43.382119 kubelet[2565]: I0709 09:57:43.382020 2565 scope.go:117] "RemoveContainer" containerID="de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3" Jul 9 09:57:43.382246 containerd[1455]: time="2025-07-09T09:57:43.382183739Z" level=error msg="ContainerStatus for \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\": not found" Jul 9 09:57:43.382985 kubelet[2565]: E0709 09:57:43.382956 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\": not found" containerID="de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3" Jul 9 09:57:43.383058 kubelet[2565]: I0709 09:57:43.382986 2565 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3"} err="failed to get container status \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\": rpc error: code = NotFound desc = an error occurred when try to find container \"de662d5a2140854f76b366939a2b5fd93cf455003915ba811f9d2bdafba583d3\": not found" Jul 9 09:57:43.383058 kubelet[2565]: I0709 09:57:43.383002 2565 scope.go:117] "RemoveContainer" containerID="9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a" Jul 9 09:57:43.383223 containerd[1455]: time="2025-07-09T09:57:43.383171410Z" level=error msg="ContainerStatus for \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\": not found" Jul 9 09:57:43.383303 kubelet[2565]: E0709 09:57:43.383273 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\": not found" containerID="9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a" Jul 9 09:57:43.383303 kubelet[2565]: I0709 09:57:43.383291 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a"} err="failed to get container status \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\": rpc error: code = NotFound desc = an error occurred when try to find container \"9fb3c24921f566720dccf27a45154df1acd48e761ea70861da479b5f0408ac3a\": not found" Jul 9 09:57:43.383303 kubelet[2565]: I0709 09:57:43.383303 2565 scope.go:117] "RemoveContainer" 
containerID="dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd" Jul 9 09:57:43.384730 containerd[1455]: time="2025-07-09T09:57:43.384451050Z" level=info msg="RemoveContainer for \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\"" Jul 9 09:57:43.387328 containerd[1455]: time="2025-07-09T09:57:43.387212818Z" level=info msg="RemoveContainer for \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\" returns successfully" Jul 9 09:57:43.387455 kubelet[2565]: I0709 09:57:43.387430 2565 scope.go:117] "RemoveContainer" containerID="dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd" Jul 9 09:57:43.387743 containerd[1455]: time="2025-07-09T09:57:43.387681113Z" level=error msg="ContainerStatus for \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\": not found" Jul 9 09:57:43.387898 kubelet[2565]: E0709 09:57:43.387876 2565 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\": not found" containerID="dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd" Jul 9 09:57:43.387936 kubelet[2565]: I0709 09:57:43.387906 2565 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd"} err="failed to get container status \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"dba4060d75688575a26e5b7fe32cdebedbca7a6884838f3135f2c7b9d65157cd\": not found" Jul 9 09:57:43.682577 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-7dca3eab8945c6c29df8f85a7db48e3e1aea7ab24fbfb45d3fbea2d559d59c82-rootfs.mount: Deactivated successfully. Jul 9 09:57:43.682702 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2de5f7c7d858bf9a5bab4672294f126bfa9e0e6c433847e10640e6cbedd79b94-rootfs.mount: Deactivated successfully. Jul 9 09:57:43.682755 systemd[1]: var-lib-kubelet-pods-99b1e727\x2d1a94\x2d4d1a\x2d9e1d\x2d13bd95a7cf66-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcmwmv.mount: Deactivated successfully. Jul 9 09:57:43.682819 systemd[1]: var-lib-kubelet-pods-f1c092ae\x2dd373\x2d4c35\x2dad03\x2d7bc24e8ea6e5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx262w.mount: Deactivated successfully. Jul 9 09:57:43.682870 systemd[1]: var-lib-kubelet-pods-f1c092ae\x2dd373\x2d4c35\x2dad03\x2d7bc24e8ea6e5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 9 09:57:43.682920 systemd[1]: var-lib-kubelet-pods-f1c092ae\x2dd373\x2d4c35\x2dad03\x2d7bc24e8ea6e5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 9 09:57:44.175199 kubelet[2565]: E0709 09:57:44.175133 2565 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 9 09:57:44.586278 sshd[4247]: Connection closed by 10.0.0.1 port 50952 Jul 9 09:57:44.588537 sshd-session[4244]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:44.596167 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:50952.service: Deactivated successfully. Jul 9 09:57:44.597906 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 09:57:44.599646 systemd[1]: session-23.scope: Consumed 1.872s CPU time, 28.6M memory peak. Jul 9 09:57:44.600364 systemd-logind[1443]: Session 23 logged out. Waiting for processes to exit. 
Jul 9 09:57:44.610803 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:40100.service - OpenSSH per-connection server daemon (10.0.0.1:40100). Jul 9 09:57:44.612257 systemd-logind[1443]: Removed session 23. Jul 9 09:57:44.658365 sshd[4405]: Accepted publickey for core from 10.0.0.1 port 40100 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk Jul 9 09:57:44.659696 sshd-session[4405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 09:57:44.664466 systemd-logind[1443]: New session 24 of user core. Jul 9 09:57:44.675734 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 9 09:57:45.132734 kubelet[2565]: E0709 09:57:45.132701 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 9 09:57:45.133208 kubelet[2565]: I0709 09:57:45.133177 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99b1e727-1a94-4d1a-9e1d-13bd95a7cf66" path="/var/lib/kubelet/pods/99b1e727-1a94-4d1a-9e1d-13bd95a7cf66/volumes" Jul 9 09:57:45.133602 kubelet[2565]: I0709 09:57:45.133584 2565 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f1c092ae-d373-4c35-ad03-7bc24e8ea6e5" path="/var/lib/kubelet/pods/f1c092ae-d373-4c35-ad03-7bc24e8ea6e5/volumes" Jul 9 09:57:45.861524 sshd[4408]: Connection closed by 10.0.0.1 port 40100 Jul 9 09:57:45.862050 sshd-session[4405]: pam_unix(sshd:session): session closed for user core Jul 9 09:57:45.874164 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:40100.service: Deactivated successfully. Jul 9 09:57:45.881859 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 09:57:45.882636 systemd[1]: session-24.scope: Consumed 1.108s CPU time, 25.3M memory peak. Jul 9 09:57:45.884663 systemd-logind[1443]: Session 24 logged out. Waiting for processes to exit. 
Jul 9 09:57:45.898939 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:40106.service - OpenSSH per-connection server daemon (10.0.0.1:40106).
Jul 9 09:57:45.902641 systemd-logind[1443]: Removed session 24.
Jul 9 09:57:45.912706 systemd[1]: Created slice kubepods-burstable-pod43c0b3e0_9bdc_49eb_81d6_9ce3bd89abba.slice - libcontainer container kubepods-burstable-pod43c0b3e0_9bdc_49eb_81d6_9ce3bd89abba.slice.
Jul 9 09:57:45.944537 sshd[4419]: Accepted publickey for core from 10.0.0.1 port 40106 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:57:45.945792 sshd-session[4419]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:57:45.949687 systemd-logind[1443]: New session 25 of user core.
Jul 9 09:57:45.963160 kubelet[2565]: I0709 09:57:45.963114 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-cni-path\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963160 kubelet[2565]: I0709 09:57:45.963161 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-host-proc-sys-kernel\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963775 kubelet[2565]: I0709 09:57:45.963183 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-lib-modules\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963775 kubelet[2565]: I0709 09:57:45.963199 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-xtables-lock\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963775 kubelet[2565]: I0709 09:57:45.963216 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-clustermesh-secrets\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963775 kubelet[2565]: I0709 09:57:45.963232 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-bpf-maps\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963775 kubelet[2565]: I0709 09:57:45.963247 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-etc-cni-netd\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963775 kubelet[2565]: I0709 09:57:45.963262 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-hostproc\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963733 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 9 09:57:45.963961 kubelet[2565]: I0709 09:57:45.963277 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5g5q\" (UniqueName: \"kubernetes.io/projected/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-kube-api-access-k5g5q\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963961 kubelet[2565]: I0709 09:57:45.963297 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-cilium-run\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963961 kubelet[2565]: I0709 09:57:45.963310 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-cilium-ipsec-secrets\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963961 kubelet[2565]: I0709 09:57:45.963325 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-host-proc-sys-net\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.963961 kubelet[2565]: I0709 09:57:45.963338 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-cilium-cgroup\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.964056 kubelet[2565]: I0709 09:57:45.963354 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-cilium-config-path\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:45.964056 kubelet[2565]: I0709 09:57:45.963396 2565 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba-hubble-tls\") pod \"cilium-24xzl\" (UID: \"43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba\") " pod="kube-system/cilium-24xzl"
Jul 9 09:57:46.013273 sshd[4422]: Connection closed by 10.0.0.1 port 40106
Jul 9 09:57:46.013752 sshd-session[4419]: pam_unix(sshd:session): session closed for user core
Jul 9 09:57:46.026381 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:40106.service: Deactivated successfully.
Jul 9 09:57:46.028138 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 09:57:46.030117 systemd-logind[1443]: Session 25 logged out. Waiting for processes to exit.
Jul 9 09:57:46.036855 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:40114.service - OpenSSH per-connection server daemon (10.0.0.1:40114).
Jul 9 09:57:46.037441 systemd-logind[1443]: Removed session 25.
Jul 9 09:57:46.077575 sshd[4428]: Accepted publickey for core from 10.0.0.1 port 40114 ssh2: RSA SHA256:cvlrIjRrKvYRpmTmS+h4CLEITKmMDgMzUbVMc8P6UWk
Jul 9 09:57:46.078367 sshd-session[4428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 09:57:46.085491 systemd-logind[1443]: New session 26 of user core.
Jul 9 09:57:46.095708 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 09:57:46.218376 kubelet[2565]: E0709 09:57:46.218334 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:46.218916 containerd[1455]: time="2025-07-09T09:57:46.218857107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24xzl,Uid:43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba,Namespace:kube-system,Attempt:0,}"
Jul 9 09:57:46.236237 containerd[1455]: time="2025-07-09T09:57:46.236121979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 9 09:57:46.236237 containerd[1455]: time="2025-07-09T09:57:46.236181780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 9 09:57:46.236237 containerd[1455]: time="2025-07-09T09:57:46.236196261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 09:57:46.236441 containerd[1455]: time="2025-07-09T09:57:46.236270743Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 09:57:46.255764 systemd[1]: Started cri-containerd-ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6.scope - libcontainer container ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6.
Jul 9 09:57:46.275347 containerd[1455]: time="2025-07-09T09:57:46.275136695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-24xzl,Uid:43c0b3e0-9bdc-49eb-81d6-9ce3bd89abba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\""
Jul 9 09:57:46.276142 kubelet[2565]: E0709 09:57:46.276031 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:46.299068 containerd[1455]: time="2025-07-09T09:57:46.299015403Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 09:57:46.317310 containerd[1455]: time="2025-07-09T09:57:46.317255464Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925\""
Jul 9 09:57:46.317823 containerd[1455]: time="2025-07-09T09:57:46.317795960Z" level=info msg="StartContainer for \"806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925\""
Jul 9 09:57:46.343747 systemd[1]: Started cri-containerd-806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925.scope - libcontainer container 806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925.
Jul 9 09:57:46.367240 containerd[1455]: time="2025-07-09T09:57:46.367103782Z" level=info msg="StartContainer for \"806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925\" returns successfully"
Jul 9 09:57:46.381706 systemd[1]: cri-containerd-806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925.scope: Deactivated successfully.
Jul 9 09:57:46.412395 containerd[1455]: time="2025-07-09T09:57:46.412217320Z" level=info msg="shim disconnected" id=806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925 namespace=k8s.io
Jul 9 09:57:46.412395 containerd[1455]: time="2025-07-09T09:57:46.412345604Z" level=warning msg="cleaning up after shim disconnected" id=806cef62020bb2880b48bb70a4d772ad74d895109210fd9493c4439cd6930925 namespace=k8s.io
Jul 9 09:57:46.412395 containerd[1455]: time="2025-07-09T09:57:46.412357804Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 09:57:47.132098 kubelet[2565]: E0709 09:57:47.132055 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:47.354124 kubelet[2565]: E0709 09:57:47.354084 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:47.358486 containerd[1455]: time="2025-07-09T09:57:47.358355152Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 09:57:47.369501 containerd[1455]: time="2025-07-09T09:57:47.369431793Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf\""
Jul 9 09:57:47.370422 containerd[1455]: time="2025-07-09T09:57:47.369949728Z" level=info msg="StartContainer for \"fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf\""
Jul 9 09:57:47.402772 systemd[1]: Started cri-containerd-fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf.scope - libcontainer container fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf.
Jul 9 09:57:47.427455 containerd[1455]: time="2025-07-09T09:57:47.427408476Z" level=info msg="StartContainer for \"fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf\" returns successfully"
Jul 9 09:57:47.432649 systemd[1]: cri-containerd-fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf.scope: Deactivated successfully.
Jul 9 09:57:47.459255 containerd[1455]: time="2025-07-09T09:57:47.459194719Z" level=info msg="shim disconnected" id=fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf namespace=k8s.io
Jul 9 09:57:47.459799 containerd[1455]: time="2025-07-09T09:57:47.459609851Z" level=warning msg="cleaning up after shim disconnected" id=fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf namespace=k8s.io
Jul 9 09:57:47.459799 containerd[1455]: time="2025-07-09T09:57:47.459629972Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 09:57:48.068137 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd9367c3330aa4567df9a0996fc153d935ec759567980c54ea70cc9ee1994aaf-rootfs.mount: Deactivated successfully.
Jul 9 09:57:48.357664 kubelet[2565]: E0709 09:57:48.357520 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:48.363140 containerd[1455]: time="2025-07-09T09:57:48.363103860Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 09:57:48.377003 containerd[1455]: time="2025-07-09T09:57:48.376966494Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731\""
Jul 9 09:57:48.378068 containerd[1455]: time="2025-07-09T09:57:48.377570152Z" level=info msg="StartContainer for \"45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731\""
Jul 9 09:57:48.403758 systemd[1]: Started cri-containerd-45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731.scope - libcontainer container 45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731.
Jul 9 09:57:48.428887 systemd[1]: cri-containerd-45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731.scope: Deactivated successfully.
Jul 9 09:57:48.431613 containerd[1455]: time="2025-07-09T09:57:48.431101953Z" level=info msg="StartContainer for \"45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731\" returns successfully"
Jul 9 09:57:48.453649 containerd[1455]: time="2025-07-09T09:57:48.453581152Z" level=info msg="shim disconnected" id=45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731 namespace=k8s.io
Jul 9 09:57:48.453649 containerd[1455]: time="2025-07-09T09:57:48.453649594Z" level=warning msg="cleaning up after shim disconnected" id=45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731 namespace=k8s.io
Jul 9 09:57:48.453840 containerd[1455]: time="2025-07-09T09:57:48.453657954Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 09:57:49.068280 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45b91aa678f58a3b4b8f6f2df09eb7f92ffdb77c2cefa36963cfcc0c9f110731-rootfs.mount: Deactivated successfully.
Jul 9 09:57:49.176201 kubelet[2565]: E0709 09:57:49.176150 2565 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 09:57:49.361122 kubelet[2565]: E0709 09:57:49.360875 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:49.366303 containerd[1455]: time="2025-07-09T09:57:49.366255363Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 09:57:49.376661 containerd[1455]: time="2025-07-09T09:57:49.376615611Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673\""
Jul 9 09:57:49.378284 containerd[1455]: time="2025-07-09T09:57:49.378246137Z" level=info msg="StartContainer for \"d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673\""
Jul 9 09:57:49.410725 systemd[1]: Started cri-containerd-d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673.scope - libcontainer container d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673.
Jul 9 09:57:49.449454 systemd[1]: cri-containerd-d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673.scope: Deactivated successfully.
Jul 9 09:57:49.452026 containerd[1455]: time="2025-07-09T09:57:49.451979470Z" level=info msg="StartContainer for \"d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673\" returns successfully"
Jul 9 09:57:49.475979 containerd[1455]: time="2025-07-09T09:57:49.475901296Z" level=info msg="shim disconnected" id=d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673 namespace=k8s.io
Jul 9 09:57:49.475979 containerd[1455]: time="2025-07-09T09:57:49.475957697Z" level=warning msg="cleaning up after shim disconnected" id=d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673 namespace=k8s.io
Jul 9 09:57:49.475979 containerd[1455]: time="2025-07-09T09:57:49.475965937Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 09:57:50.068420 systemd[1]: run-containerd-runc-k8s.io-d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673-runc.FXMZIG.mount: Deactivated successfully.
Jul 9 09:57:50.068522 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0a09f5f3b464cf9091508ca6d8631e1447eff03ead3717d546df0346c013673-rootfs.mount: Deactivated successfully.
Jul 9 09:57:50.132196 kubelet[2565]: E0709 09:57:50.132141 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:50.365825 kubelet[2565]: E0709 09:57:50.365174 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:50.372202 containerd[1455]: time="2025-07-09T09:57:50.372034917Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 09:57:50.385446 containerd[1455]: time="2025-07-09T09:57:50.385408081Z" level=info msg="CreateContainer within sandbox \"ec47f97b0a424afc6ce5a7af6ce40eee391b3c60fa8d90ca2bed35ce61c6efd6\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8996f1f0a2ee1911f0c0c7f6c3e93f447bde02804f68bc6a0e70198bdffce235\""
Jul 9 09:57:50.392213 containerd[1455]: time="2025-07-09T09:57:50.392156426Z" level=info msg="StartContainer for \"8996f1f0a2ee1911f0c0c7f6c3e93f447bde02804f68bc6a0e70198bdffce235\""
Jul 9 09:57:50.421709 systemd[1]: Started cri-containerd-8996f1f0a2ee1911f0c0c7f6c3e93f447bde02804f68bc6a0e70198bdffce235.scope - libcontainer container 8996f1f0a2ee1911f0c0c7f6c3e93f447bde02804f68bc6a0e70198bdffce235.
Jul 9 09:57:50.447310 containerd[1455]: time="2025-07-09T09:57:50.447264049Z" level=info msg="StartContainer for \"8996f1f0a2ee1911f0c0c7f6c3e93f447bde02804f68bc6a0e70198bdffce235\" returns successfully"
Jul 9 09:57:50.759596 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 9 09:57:51.234115 kubelet[2565]: I0709 09:57:51.234024 2565 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-09T09:57:51Z","lastTransitionTime":"2025-07-09T09:57:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 9 09:57:51.370215 kubelet[2565]: E0709 09:57:51.370126 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:51.385747 kubelet[2565]: I0709 09:57:51.385592 2565 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-24xzl" podStartSLOduration=6.385577036 podStartE2EDuration="6.385577036s" podCreationTimestamp="2025-07-09 09:57:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 09:57:51.384827496 +0000 UTC m=+82.356199871" watchObservedRunningTime="2025-07-09 09:57:51.385577036 +0000 UTC m=+82.356949451"
Jul 9 09:57:52.372891 kubelet[2565]: E0709 09:57:52.372859 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:53.374444 kubelet[2565]: E0709 09:57:53.374294 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:53.642043 systemd-networkd[1392]: lxc_health: Link UP
Jul 9 09:57:53.648381 systemd-networkd[1392]: lxc_health: Gained carrier
Jul 9 09:57:54.376381 kubelet[2565]: E0709 09:57:54.376349 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:54.580081 systemd[1]: run-containerd-runc-k8s.io-8996f1f0a2ee1911f0c0c7f6c3e93f447bde02804f68bc6a0e70198bdffce235-runc.YxgKQw.mount: Deactivated successfully.
Jul 9 09:57:55.252841 systemd-networkd[1392]: lxc_health: Gained IPv6LL
Jul 9 09:57:55.377266 kubelet[2565]: E0709 09:57:55.376841 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:56.378269 kubelet[2565]: E0709 09:57:56.378213 2565 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 9 09:57:58.836596 systemd[1]: run-containerd-runc-k8s.io-8996f1f0a2ee1911f0c0c7f6c3e93f447bde02804f68bc6a0e70198bdffce235-runc.0eiAQr.mount: Deactivated successfully.
Jul 9 09:57:58.893607 sshd[4435]: Connection closed by 10.0.0.1 port 40114
Jul 9 09:57:58.893576 sshd-session[4428]: pam_unix(sshd:session): session closed for user core
Jul 9 09:57:58.897173 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:40114.service: Deactivated successfully.
Jul 9 09:57:58.901078 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 09:57:58.902054 systemd-logind[1443]: Session 26 logged out. Waiting for processes to exit.
Jul 9 09:57:58.905213 systemd-logind[1443]: Removed session 26.